Enabling ErrorProne and NullAway with initial fixes #1248

Status: Open. Wants to merge 1 commit into base: master.

Commit d78594a: enabling ErrorProne and NullAway with initial fixes
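For context: ErrorProne is a javac plugin and NullAway is an ErrorProne check that enforces null-safety at compile time; in a Maven build they are typically wired into maven-compiler-plugin via compilerArgs such as -Xplugin:ErrorProne -Xep:NullAway:ERROR -XepOpt:NullAway:AnnotatedPackages=<root package>. The sketch below shows the kind of "initial fix" NullAway usually forces when first enabled: marking genuinely nullable values with @Nullable and guarding dereferences. It is a hypothetical example, not code from this PR; the class name, field, and default URL are illustrative only.

```java
// Hypothetical example (not from this PR): the sort of change NullAway requires
// once it runs as an ErrorProne plugin. NullAway treats unannotated references
// in annotated packages as @NonNull, so nullable values must be marked and
// checked before being dereferenced.
// The @Nullable annotation here assumes a JSR-305-style dependency on the classpath.
import javax.annotation.Nullable;

class SinkConfig {

  // This field may legitimately be unset; without @Nullable, NullAway reports
  // "assigning @Nullable expression to @NonNull field" at the setter.
  @Nullable private String fhirEndpoint;

  void setFhirEndpoint(@Nullable String endpoint) {
    this.fhirEndpoint = endpoint;
  }

  String endpointOrDefault() {
    // Guarding the dereference satisfies NullAway's dereference check.
    if (fhirEndpoint == null) {
      return "http://localhost:8098/fhir"; // illustrative default only
    }
    return fhirEndpoint.trim();
  }
}
```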
Google Cloud Build / fhir-data-pipes-pr (cloud-build-fhir) failed Nov 19, 2024 in 53m 29s

Summary

Build Information

Trigger: fhir-data-pipes-pr
Build: 1a7513b1-541c-43db-a413-854db3488d59
Start: 2024-11-19T11:37:56-08:00
Duration: 52m46.692s
Status: FAILURE

Steps

Step Status Duration
Launch HAPI Source Server SUCCESS 52.733s
Launch Sink Server Search SUCCESS 49.452s
Launch Sink Server JDBC SUCCESS 49.571s
Wait for the initial Servers Start SUCCESS 1m6.107s
Compile Bunsen and Pipeline SUCCESS 6m51.955s
Build Uploader Image SUCCESS 22.178s
Run Uploader Unit Tests SUCCESS 1.107s
Build E2E Image SUCCESS 2m43.64s
Upload to HAPI SUCCESS 1m24.107s
Build Pipeline Images SUCCESS 28.668s
Run Batch Pipeline in FHIR-search mode with HAPI source SUCCESS 20m36.735s
Run E2E Test for FHIR-search mode with HAPI source SUCCESS 7.177s
Run Batch Pipeline for JDBC mode with HAPI source SUCCESS 20m46.023s
Run E2E Test for JDBC mode with HAPI source SUCCESS 7.508s
Run Batch Pipeline for BULK_EXPORT mode with HAPI source SUCCESS 24m8.49s
Run E2E Test for BULK_EXPORT mode with HAPI source FAILURE 7.566s
Turn down FHIR Sink Server Search SUCCESS 4.004s
Turn down FHIR Sink Server JDBC SUCCESS 17.71s
Create views database SUCCESS 796ms
Launch HAPI FHIR Sink Server Controller SUCCESS 4.159s
Bring up controller and Spark containers SUCCESS 11m50.003s
Run E2E Test for Dockerized Controller and Spark Thriftserver SUCCESS 2m27.502s
Bring down controller and Spark containers SUCCESS 25.013s
Turn down HAPI Source Server SUCCESS 2.469s
Turn down FHIR Sink Server Controller for e2e tests SUCCESS 3.221s
Launch OpenMRS Server and HAPI FHIR Sink Server for OpenMRS QUEUED 0s
Wait for Servers Start QUEUED 0s
Launch Streaming Pipeline QUEUED 0s
Run E2E Test for STREAMING, using OpenMRS Source QUEUED 0s
Upload to OpenMRS QUEUED 0s
Run Batch Pipeline FHIR-search mode with OpenMRS source QUEUED 0s
Run E2E Test for FHIR-search mode with OpenMRS source QUEUED 0s
Run Batch Pipeline for JDBC mode with OpenMRS source QUEUED 0s
Run E2E Test for JDBC mode with OpenMRS source QUEUED 0s
Test Indicators QUEUED 0s
Turn down Webserver and HAPI Server QUEUED 0s

Details

starting build "1a7513b1-541c-43db-a413-854db3488d59"

FETCHSOURCE
hint: Using 'master' as the name for the initial branch. This default branch name
hint: is subject to change. To configure the initial branch name to use in all
hint: of your new repositories, which will suppress this warning, call:
hint:
hint: 	git config --global init.defaultBranch <name>
hint:
hint: Names commonly chosen instead of 'master' are 'main', 'trunk' and
hint: 'development'. The just-created branch can be renamed via this command:
hint:
hint: 	git branch -m <name>
Initialized empty Git repository in /workspace/.git/
From https://github.com/google/fhir-data-pipes
 * branch            d78594a247454e3f464109e93b9ec1181e91194f -> FETCH_HEAD
Updating files: 100% (979/979), done.
HEAD is now at d78594a enabling ErrorProne and NullAway with initial fixes
BUILD
Starting Step #5 - "Build Uploader Image"
Starting Step #2 - "Launch Sink Server JDBC"
Starting Step #7 - "Build E2E Image"
Starting Step #0 - "Launch HAPI Source Server"
Starting Step #4 - "Compile Bunsen and Pipeline"
Starting Step #1 - "Launch Sink Server Search"
Step #2 - "Launch Sink Server JDBC": Pulling image: docker/compose
Step #0 - "Launch HAPI Source Server": Pulling image: docker/compose
Step #4 - "Compile Bunsen and Pipeline": Pulling image: maven:3.8.5-openjdk-17
Step #1 - "Launch Sink Server Search": Pulling image: docker/compose
Step #7 - "Build E2E Image": Already have image (with digest): gcr.io/cloud-builders/docker
Step #5 - "Build Uploader Image": Already have image (with digest): gcr.io/cloud-builders/docker
Step #0 - "Launch HAPI Source Server": Using default tag: latest
Step #2 - "Launch Sink Server JDBC": Using default tag: latest
Step #1 - "Launch Sink Server Search": Using default tag: latest
Step #5 - "Build Uploader Image": Sending build context to Docker daemon  1.466MB

Step #5 - "Build Uploader Image": Step 1/10 : FROM python:3.7-slim
Step #7 - "Build E2E Image": Sending build context to Docker daemon  66.43MB

Step #7 - "Build E2E Image": Step 1/14 : FROM maven:3.8.7-eclipse-temurin-17-focal
Step #5 - "Build Uploader Image": 3.7-slim: Pulling from library/python
Step #2 - "Launch Sink Server JDBC": latest: Pulling from docker/compose
Step #2 - "Launch Sink Server JDBC": aad63a933944: Pulling fs layer
Step #2 - "Launch Sink Server JDBC": b396cd7cbac4: Pulling fs layer
Step #2 - "Launch Sink Server JDBC": 0426ec0ed60a: Pulling fs layer
Step #2 - "Launch Sink Server JDBC": 9ac2a98ece5b: Pulling fs layer
Step #2 - "Launch Sink Server JDBC": 9ac2a98ece5b: Waiting
Step #4 - "Compile Bunsen and Pipeline": 3.8.5-openjdk-17: Pulling from library/maven
Step #0 - "Launch HAPI Source Server": latest: Pulling from docker/compose
Step #0 - "Launch HAPI Source Server": aad63a933944: Pulling fs layer
Step #0 - "Launch HAPI Source Server": b396cd7cbac4: Pulling fs layer
Step #0 - "Launch HAPI Source Server": 0426ec0ed60a: Pulling fs layer
Step #0 - "Launch HAPI Source Server": 9ac2a98ece5b: Pulling fs layer
Step #0 - "Launch HAPI Source Server": 9ac2a98ece5b: Waiting
Step #0 - "Launch HAPI Source Server": b396cd7cbac4: Verifying Checksum
Step #0 - "Launch HAPI Source Server": b396cd7cbac4: Download complete
Step #2 - "Launch Sink Server JDBC": b396cd7cbac4: Verifying Checksum
Step #2 - "Launch Sink Server JDBC": b396cd7cbac4: Download complete
Step #7 - "Build E2E Image": 3.8.7-eclipse-temurin-17-focal: Pulling from library/maven
Step #5 - "Build Uploader Image": a803e7c4b030: Pulling fs layer
Step #5 - "Build Uploader Image": bf3336e84c8e: Pulling fs layer
Step #5 - "Build Uploader Image": 8973eb85275f: Pulling fs layer
Step #5 - "Build Uploader Image": f9afc3cc0135: Pulling fs layer
Step #5 - "Build Uploader Image": 39312d8b4ab7: Pulling fs layer
Step #5 - "Build Uploader Image": bf3336e84c8e: Waiting
Step #5 - "Build Uploader Image": 39312d8b4ab7: Waiting
Step #5 - "Build Uploader Image": a803e7c4b030: Waiting
Step #5 - "Build Uploader Image": 8973eb85275f: Waiting
Step #5 - "Build Uploader Image": f9afc3cc0135: Waiting
Step #1 - "Launch Sink Server Search": latest: Pulling from docker/compose
Step #1 - "Launch Sink Server Search": aad63a933944: Pulling fs layer
Step #1 - "Launch Sink Server Search": b396cd7cbac4: Pulling fs layer
Step #1 - "Launch Sink Server Search": 0426ec0ed60a: Pulling fs layer
Step #1 - "Launch Sink Server Search": 9ac2a98ece5b: Pulling fs layer
Step #1 - "Launch Sink Server Search": 9ac2a98ece5b: Waiting
Step #1 - "Launch Sink Server Search": b396cd7cbac4: Download complete
Step #2 - "Launch Sink Server JDBC": aad63a933944: Verifying Checksum
Step #2 - "Launch Sink Server JDBC": aad63a933944: Download complete
Step #1 - "Launch Sink Server Search": aad63a933944: Verifying Checksum
Step #1 - "Launch Sink Server Search": aad63a933944: Download complete
Step #0 - "Launch HAPI Source Server": aad63a933944: Verifying Checksum
Step #0 - "Launch HAPI Source Server": aad63a933944: Download complete
Step #0 - "Launch HAPI Source Server": aad63a933944: Pull complete
Step #2 - "Launch Sink Server JDBC": aad63a933944: Pull complete
Step #1 - "Launch Sink Server Search": aad63a933944: Pull complete
Step #2 - "Launch Sink Server JDBC": b396cd7cbac4: Pull complete
Step #1 - "Launch Sink Server Search": b396cd7cbac4: Pull complete
Step #0 - "Launch HAPI Source Server": b396cd7cbac4: Pull complete
Step #1 - "Launch Sink Server Search": 0426ec0ed60a: Verifying Checksum
Step #1 - "Launch Sink Server Search": 0426ec0ed60a: Download complete
Step #0 - "Launch HAPI Source Server": 0426ec0ed60a: Verifying Checksum
Step #0 - "Launch HAPI Source Server": 0426ec0ed60a: Download complete
Step #2 - "Launch Sink Server JDBC": 0426ec0ed60a: Verifying Checksum
Step #2 - "Launch Sink Server JDBC": 0426ec0ed60a: Download complete
Step #4 - "Compile Bunsen and Pipeline": 38a980f2cc8a: Pulling fs layer
Step #4 - "Compile Bunsen and Pipeline": de849f1cfbe6: Pulling fs layer
Step #4 - "Compile Bunsen and Pipeline": a7203ca35e75: Pulling fs layer
Step #4 - "Compile Bunsen and Pipeline": 3337662e6dc9: Pulling fs layer
Step #4 - "Compile Bunsen and Pipeline": 81485058ab89: Pulling fs layer
Step #4 - "Compile Bunsen and Pipeline": b548970362bb: Pulling fs layer
Step #4 - "Compile Bunsen and Pipeline": dbd02ad382f5: Pulling fs layer
Step #4 - "Compile Bunsen and Pipeline": a7203ca35e75: Waiting
Step #4 - "Compile Bunsen and Pipeline": 3337662e6dc9: Waiting
Step #4 - "Compile Bunsen and Pipeline": 81485058ab89: Waiting
Step #4 - "Compile Bunsen and Pipeline": 38a980f2cc8a: Waiting
Step #4 - "Compile Bunsen and Pipeline": dbd02ad382f5: Waiting
Step #4 - "Compile Bunsen and Pipeline": de849f1cfbe6: Waiting
Step #0 - "Launch HAPI Source Server": 0426ec0ed60a: Pull complete
Step #1 - "Launch Sink Server Search": 0426ec0ed60a: Pull complete
Step #2 - "Launch Sink Server JDBC": 0426ec0ed60a: Pull complete
Step #7 - "Build E2E Image": 7608715873ec: Pulling fs layer
Step #7 - "Build E2E Image": 64a0b7566174: Pulling fs layer
Step #7 - "Build E2E Image": 414e25888ba9: Pulling fs layer
Step #7 - "Build E2E Image": fa1796814410: Pulling fs layer
Step #7 - "Build E2E Image": dc3ab4515b24: Pulling fs layer
Step #7 - "Build E2E Image": 495d1ae42cb9: Pulling fs layer
Step #7 - "Build E2E Image": 66b6d86e5b33: Pulling fs layer
Step #7 - "Build E2E Image": 90062ecd5dec: Pulling fs layer
Step #7 - "Build E2E Image": 64a0b7566174: Waiting
Step #7 - "Build E2E Image": 495d1ae42cb9: Waiting
Step #7 - "Build E2E Image": 414e25888ba9: Waiting
Step #7 - "Build E2E Image": 90062ecd5dec: Waiting
Step #7 - "Build E2E Image": 7608715873ec: Waiting
Step #7 - "Build E2E Image": 66b6d86e5b33: Waiting
Step #7 - "Build E2E Image": fa1796814410: Waiting
Step #1 - "Launch Sink Server Search": 9ac2a98ece5b: Verifying Checksum
Step #1 - "Launch Sink Server Search": 9ac2a98ece5b: Download complete
Step #2 - "Launch Sink Server JDBC": 9ac2a98ece5b: Verifying Checksum
Step #2 - "Launch Sink Server JDBC": 9ac2a98ece5b: Download complete
Step #0 - "Launch HAPI Source Server": 9ac2a98ece5b: Verifying Checksum
Step #0 - "Launch HAPI Source Server": 9ac2a98ece5b: Download complete
Step #0 - "Launch HAPI Source Server": 9ac2a98ece5b: Pull complete
Step #1 - "Launch Sink Server Search": 9ac2a98ece5b: Pull complete
Step #2 - "Launch Sink Server JDBC": 9ac2a98ece5b: Pull complete
Step #0 - "Launch HAPI Source Server": Digest: sha256:b60a020c0f68047b353a4a747f27f5e5ddb17116b7b018762edfb6f7a6439a82
Step #2 - "Launch Sink Server JDBC": Digest: sha256:b60a020c0f68047b353a4a747f27f5e5ddb17116b7b018762edfb6f7a6439a82
Step #2 - "Launch Sink Server JDBC": Status: Downloaded newer image for docker/compose:latest
Step #0 - "Launch HAPI Source Server": Status: Downloaded newer image for docker/compose:latest
Step #2 - "Launch Sink Server JDBC": docker.io/docker/compose:latest
Step #0 - "Launch HAPI Source Server": docker.io/docker/compose:latest
Step #1 - "Launch Sink Server Search": Digest: sha256:b60a020c0f68047b353a4a747f27f5e5ddb17116b7b018762edfb6f7a6439a82
Step #1 - "Launch Sink Server Search": Status: Image is up to date for docker/compose:latest
Step #1 - "Launch Sink Server Search": docker.io/docker/compose:latest
Step #5 - "Build Uploader Image": a803e7c4b030: Verifying Checksum
Step #5 - "Build Uploader Image": a803e7c4b030: Download complete
Step #5 - "Build Uploader Image": bf3336e84c8e: Verifying Checksum
Step #5 - "Build Uploader Image": bf3336e84c8e: Download complete
Step #5 - "Build Uploader Image": f9afc3cc0135: Verifying Checksum
Step #5 - "Build Uploader Image": f9afc3cc0135: Download complete
Step #5 - "Build Uploader Image": a803e7c4b030: Pull complete
Step #5 - "Build Uploader Image": bf3336e84c8e: Pull complete
Step #5 - "Build Uploader Image": 8973eb85275f: Verifying Checksum
Step #5 - "Build Uploader Image": 8973eb85275f: Download complete
Step #5 - "Build Uploader Image": 8973eb85275f: Pull complete
Step #5 - "Build Uploader Image": f9afc3cc0135: Pull complete
Step #4 - "Compile Bunsen and Pipeline": 38a980f2cc8a: Verifying Checksum
Step #4 - "Compile Bunsen and Pipeline": 38a980f2cc8a: Download complete
Step #5 - "Build Uploader Image": 39312d8b4ab7: Verifying Checksum
Step #5 - "Build Uploader Image": 39312d8b4ab7: Download complete
Step #5 - "Build Uploader Image": 39312d8b4ab7: Pull complete
Step #5 - "Build Uploader Image": Digest: sha256:b53f496ca43e5af6994f8e316cf03af31050bf7944e0e4a308ad86c001cf028b
Step #5 - "Build Uploader Image": Status: Downloaded newer image for python:3.7-slim
Step #5 - "Build Uploader Image":  ---> a255ffcb469f
Step #5 - "Build Uploader Image": Step 2/10 : WORKDIR /uploader
Step #4 - "Compile Bunsen and Pipeline": de849f1cfbe6: Verifying Checksum
Step #4 - "Compile Bunsen and Pipeline": de849f1cfbe6: Download complete
Step #4 - "Compile Bunsen and Pipeline": 38a980f2cc8a: Pull complete
Step #5 - "Build Uploader Image":  ---> Running in daf1ee2484af
Step #2 - "Launch Sink Server JDBC": Creating volume "sink-server-jdbc_hapi-data" with default driver
Step #5 - "Build Uploader Image": Removing intermediate container daf1ee2484af
Step #5 - "Build Uploader Image":  ---> d1da41c8cbe1
Step #5 - "Build Uploader Image": Step 3/10 : COPY  ./ ./
Step #1 - "Launch Sink Server Search": Creating volume "sink-server-search_hapi-data" with default driver
Step #5 - "Build Uploader Image":  ---> 5120021411ee
Step #5 - "Build Uploader Image": Step 4/10 : RUN pip install -r requirements.txt
Step #5 - "Build Uploader Image":  ---> Running in f76730a37184
Step #0 - "Launch HAPI Source Server": Creating network "hapi-compose_default" with the default driver
Step #4 - "Compile Bunsen and Pipeline": de849f1cfbe6: Pull complete
Step #0 - "Launch HAPI Source Server": Creating volume "hapi-compose_hapi-fhir-db" with default driver
Step #2 - "Launch Sink Server JDBC": Pulling sink-server (hapiproject/hapi:latest)...
Step #0 - "Launch HAPI Source Server": Creating volume "hapi-compose_hapi-server" with default driver
Step #4 - "Compile Bunsen and Pipeline": 81485058ab89: Verifying Checksum
Step #4 - "Compile Bunsen and Pipeline": 81485058ab89: Download complete
Step #1 - "Launch Sink Server Search": Pulling sink-server (hapiproject/hapi:latest)...
Step #0 - "Launch HAPI Source Server": Pulling db (postgres:)...
Step #4 - "Compile Bunsen and Pipeline": 3337662e6dc9: Verifying Checksum
Step #4 - "Compile Bunsen and Pipeline": 3337662e6dc9: Download complete
Step #4 - "Compile Bunsen and Pipeline": a7203ca35e75: Verifying Checksum
Step #4 - "Compile Bunsen and Pipeline": a7203ca35e75: Download complete
Step #4 - "Compile Bunsen and Pipeline": b548970362bb: Verifying Checksum
Step #4 - "Compile Bunsen and Pipeline": b548970362bb: Download complete
Step #4 - "Compile Bunsen and Pipeline": dbd02ad382f5: Download complete
Step #5 - "Build Uploader Image": Collecting google-auth
Step #7 - "Build E2E Image": 7608715873ec: Verifying Checksum
Step #7 - "Build E2E Image": 7608715873ec: Download complete
Step #5 - "Build Uploader Image":   Downloading google_auth-2.36.0-py2.py3-none-any.whl (209 kB)
Step #5 - "Build Uploader Image":      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 209.5/209.5 kB 6.7 MB/s eta 0:00:00
Step #5 - "Build Uploader Image": Collecting mock
Step #5 - "Build Uploader Image":   Downloading mock-5.1.0-py3-none-any.whl (30 kB)
Step #5 - "Build Uploader Image": Collecting requests
Step #5 - "Build Uploader Image":   Downloading requests-2.31.0-py3-none-any.whl (62 kB)
Step #5 - "Build Uploader Image":      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 62.6/62.6 kB 6.6 MB/s eta 0:00:00
Step #5 - "Build Uploader Image": Collecting rsa<5,>=3.1.4
Step #5 - "Build Uploader Image":   Downloading rsa-4.9-py3-none-any.whl (34 kB)
Step #5 - "Build Uploader Image": Collecting cachetools<6.0,>=2.0.0
Step #5 - "Build Uploader Image":   Downloading cachetools-5.5.0-py3-none-any.whl (9.5 kB)
Step #5 - "Build Uploader Image": Collecting pyasn1-modules>=0.2.1
Step #5 - "Build Uploader Image":   Downloading pyasn1_modules-0.3.0-py2.py3-none-any.whl (181 kB)
Step #5 - "Build Uploader Image":      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 181.3/181.3 kB 14.7 MB/s eta 0:00:00
Step #4 - "Compile Bunsen and Pipeline": a7203ca35e75: Pull complete
Step #5 - "Build Uploader Image": Collecting idna<4,>=2.5
Step #5 - "Build Uploader Image":   Downloading idna-3.10-py3-none-any.whl (70 kB)
Step #5 - "Build Uploader Image":      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 70.4/70.4 kB 7.3 MB/s eta 0:00:00
Step #7 - "Build E2E Image": 7608715873ec: Pull complete
Step #7 - "Build E2E Image": 64a0b7566174: Verifying Checksum
Step #7 - "Build E2E Image": 64a0b7566174: Download complete
Step #7 - "Build E2E Image": fa1796814410: Download complete
Step #5 - "Build Uploader Image": Collecting charset-normalizer<4,>=2
Step #5 - "Build Uploader Image":   Downloading charset_normalizer-3.4.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (138 kB)
Step #5 - "Build Uploader Image":      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 138.3/138.3 kB 12.6 MB/s eta 0:00:00
Step #5 - "Build Uploader Image": Collecting certifi>=2017.4.17
Step #5 - "Build Uploader Image":   Downloading certifi-2024.8.30-py3-none-any.whl (167 kB)
Step #5 - "Build Uploader Image":      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 167.3/167.3 kB 12.9 MB/s eta 0:00:00
Step #5 - "Build Uploader Image": Collecting urllib3<3,>=1.21.1
Step #5 - "Build Uploader Image":   Downloading urllib3-2.0.7-py3-none-any.whl (124 kB)
Step #5 - "Build Uploader Image":      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 124.2/124.2 kB 13.8 MB/s eta 0:00:00
Step #7 - "Build E2E Image": 64a0b7566174: Pull complete
Step #5 - "Build Uploader Image": Collecting pyasn1<0.6.0,>=0.4.6
Step #5 - "Build Uploader Image":   Downloading pyasn1-0.5.1-py2.py3-none-any.whl (84 kB)
Step #5 - "Build Uploader Image":      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 84.9/84.9 kB 12.6 MB/s eta 0:00:00
Step #2 - "Launch Sink Server JDBC": latest: Pulling from hapiproject/hapi
Step #5 - "Build Uploader Image": Installing collected packages: urllib3, pyasn1, mock, idna, charset-normalizer, certifi, cachetools, rsa, requests, pyasn1-modules, google-auth
Step #7 - "Build E2E Image": 414e25888ba9: Verifying Checksum
Step #7 - "Build E2E Image": 414e25888ba9: Download complete
Step #1 - "Launch Sink Server Search": latest: Pulling from hapiproject/hapi
Step #0 - "Launch HAPI Source Server": latest: Pulling from library/postgres
Step #5 - "Build Uploader Image": Successfully installed cachetools-5.5.0 certifi-2024.8.30 charset-normalizer-3.4.0 google-auth-2.36.0 idna-3.10 mock-5.1.0 pyasn1-0.5.1 pyasn1-modules-0.3.0 requests-2.31.0 rsa-4.9 urllib3-2.0.7
Step #5 - "Build Uploader Image": WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
Step #7 - "Build E2E Image": 495d1ae42cb9: Verifying Checksum
Step #7 - "Build E2E Image": 495d1ae42cb9: Download complete
Step #7 - "Build E2E Image": 66b6d86e5b33: Download complete
Step #5 - "Build Uploader Image": [notice] A new release of pip is available: 23.0.1 -> 24.0
Step #5 - "Build Uploader Image": [notice] To update, run: pip install --upgrade pip
Step #4 - "Compile Bunsen and Pipeline": 3337662e6dc9: Pull complete
Step #4 - "Compile Bunsen and Pipeline": 81485058ab89: Pull complete
Step #4 - "Compile Bunsen and Pipeline": b548970362bb: Pull complete
Step #7 - "Build E2E Image": dc3ab4515b24: Verifying Checksum
Step #7 - "Build E2E Image": dc3ab4515b24: Download complete
Step #7 - "Build E2E Image": 90062ecd5dec: Verifying Checksum
Step #7 - "Build E2E Image": 90062ecd5dec: Download complete
Step #4 - "Compile Bunsen and Pipeline": dbd02ad382f5: Pull complete
Step #7 - "Build E2E Image": 414e25888ba9: Pull complete
Step #4 - "Compile Bunsen and Pipeline": Digest: sha256:3a9c30b3af6278a8ae0007d3a3bf00fff80ec3ed7ae4eb9bfa1772853101549b
Step #4 - "Compile Bunsen and Pipeline": Status: Downloaded newer image for maven:3.8.5-openjdk-17
Step #4 - "Compile Bunsen and Pipeline": docker.io/library/maven:3.8.5-openjdk-17
Step #7 - "Build E2E Image": fa1796814410: Pull complete
Step #7 - "Build E2E Image": dc3ab4515b24: Pull complete
Step #7 - "Build E2E Image": 495d1ae42cb9: Pull complete
Step #7 - "Build E2E Image": 66b6d86e5b33: Pull complete
Step #7 - "Build E2E Image": 90062ecd5dec: Pull complete
Step #7 - "Build E2E Image": Digest: sha256:ad4b34f02e52164df83182a2a05074b5288d6e6bcc2dfa0ce3d6fa43ec8b557f
Step #7 - "Build E2E Image": Status: Downloaded newer image for maven:3.8.7-eclipse-temurin-17-focal
Step #7 - "Build E2E Image":  ---> 896b49b4d0b7
Step #7 - "Build E2E Image": Step 2/14 : RUN apt-get update && apt-get install -y jq  python3.8 python3-pip
Step #7 - "Build E2E Image":  ---> Running in fcc363c07d44
Step #5 - "Build Uploader Image": Removing intermediate container f76730a37184
Step #5 - "Build Uploader Image":  ---> 128ef44d1f23
Step #5 - "Build Uploader Image": Step 5/10 : ENV INPUT_DIR="./test_files"
Step #5 - "Build Uploader Image":  ---> Running in 2c945d873104
Step #5 - "Build Uploader Image": Removing intermediate container 2c945d873104
Step #5 - "Build Uploader Image":  ---> ef70ed256805
Step #5 - "Build Uploader Image": Step 6/10 : ENV CORES=""
Step #7 - "Build E2E Image": Get:1 http://security.ubuntu.com/ubuntu focal-security InRelease [128 kB]
Step #5 - "Build Uploader Image":  ---> Running in bb3b256e0387
Step #7 - "Build E2E Image": Get:2 http://archive.ubuntu.com/ubuntu focal InRelease [265 kB]
Step #5 - "Build Uploader Image": Removing intermediate container bb3b256e0387
Step #5 - "Build Uploader Image":  ---> 0a762ad40065
Step #5 - "Build Uploader Image": Step 7/10 : ENV CONVERT=""
Step #5 - "Build Uploader Image":  ---> Running in e251c03fa6c2
Step #5 - "Build Uploader Image": Removing intermediate container e251c03fa6c2
Step #5 - "Build Uploader Image":  ---> 954e863ca6f5
Step #5 - "Build Uploader Image": Step 8/10 : ENV SINK_TYPE="HAPI"
Step #5 - "Build Uploader Image":  ---> Running in 00f2bc9c6052
Step #5 - "Build Uploader Image": Removing intermediate container 00f2bc9c6052
Step #5 - "Build Uploader Image":  ---> 23db67e5a617
Step #5 - "Build Uploader Image": Step 9/10 : ENV FHIR_ENDPOINT="http://localhost:8098/fhir"
Step #5 - "Build Uploader Image":  ---> Running in 5ca48c26f35a
Step #7 - "Build E2E Image": Get:3 http://security.ubuntu.com/ubuntu focal-security/main amd64 Packages [4,107 kB]
Step #5 - "Build Uploader Image": Removing intermediate container 5ca48c26f35a
Step #5 - "Build Uploader Image":  ---> 8dca51b8acf8
Step #5 - "Build Uploader Image": Step 10/10 : CMD cd /uploader; python main.py ${SINK_TYPE}     ${FHIR_ENDPOINT} --input_dir ${INPUT_DIR} ${CORES} ${CONVERT}
Step #7 - "Build E2E Image": Get:4 http://archive.ubuntu.com/ubuntu focal-updates InRelease [128 kB]
Step #5 - "Build Uploader Image":  ---> Running in c77a3aba0452
Step #5 - "Build Uploader Image": Removing intermediate container c77a3aba0452
Step #5 - "Build Uploader Image":  ---> f2f9532a4810
Step #5 - "Build Uploader Image": Successfully built f2f9532a4810
Step #7 - "Build E2E Image": Get:5 http://archive.ubuntu.com/ubuntu focal-backports InRelease [128 kB]
Step #5 - "Build Uploader Image": Successfully tagged us-docker.pkg.dev/cloud-build-fhir/fhir-analytics/synthea-uploader:d78594a
Step #7 - "Build E2E Image": Get:6 http://archive.ubuntu.com/ubuntu focal/restricted amd64 Packages [33.4 kB]
Step #7 - "Build E2E Image": Get:7 http://archive.ubuntu.com/ubuntu focal/universe amd64 Packages [11.3 MB]
Finished Step #5 - "Build Uploader Image"
Starting Step #6 - "Run Uploader Unit Tests"
Step #6 - "Run Uploader Unit Tests": Already have image: us-docker.pkg.dev/cloud-build-fhir/fhir-analytics/synthea-uploader:d78594a
Step #7 - "Build E2E Image": Get:8 http://security.ubuntu.com/ubuntu focal-security/multiverse amd64 Packages [30.9 kB]
Step #7 - "Build E2E Image": Get:9 http://security.ubuntu.com/ubuntu focal-security/restricted amd64 Packages [4,137 kB]
Step #7 - "Build E2E Image": Get:10 http://security.ubuntu.com/ubuntu focal-security/universe amd64 Packages [1,277 kB]
Step #4 - "Compile Bunsen and Pipeline": [INFO] Error stacktraces are turned on.
Step #4 - "Compile Bunsen and Pipeline": [INFO] Scanning for projects...
Step #7 - "Build E2E Image": Get:11 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages [1,275 kB]
Step #7 - "Build E2E Image": Get:12 http://archive.ubuntu.com/ubuntu focal/multiverse amd64 Packages [177 kB]
Step #7 - "Build E2E Image": Get:13 http://archive.ubuntu.com/ubuntu focal-updates/universe amd64 Packages [1,568 kB]
Step #7 - "Build E2E Image": Get:14 http://archive.ubuntu.com/ubuntu focal-updates/restricted amd64 Packages [4,295 kB]
Step #7 - "Build E2E Image": Get:15 http://archive.ubuntu.com/ubuntu focal-updates/main amd64 Packages [4,573 kB]
Step #7 - "Build E2E Image": Get:16 http://archive.ubuntu.com/ubuntu focal-updates/multiverse amd64 Packages [33.5 kB]
Step #7 - "Build E2E Image": Get:17 http://archive.ubuntu.com/ubuntu focal-backports/universe amd64 Packages [28.6 kB]
Step #7 - "Build E2E Image": Get:18 http://archive.ubuntu.com/ubuntu focal-backports/main amd64 Packages [55.2 kB]
Step #6 - "Run Uploader Unit Tests": ...........WARNING:uploader.Uploader:No locations found in sink. Using Unknown Location.
Step #6 - "Run Uploader Unit Tests": ...
Step #6 - "Run Uploader Unit Tests": ----------------------------------------------------------------------
Step #6 - "Run Uploader Unit Tests": Ran 14 tests in 0.047s
Step #6 - "Run Uploader Unit Tests": 
Step #6 - "Run Uploader Unit Tests": OK
Finished Step #6 - "Run Uploader Unit Tests"
Step #7 - "Build E2E Image": Fetched 33.6 MB in 3s (11.0 MB/s)
Step #4 - "Compile Bunsen and Pipeline": [INFO] ------------------------------------------------------------------------
Step #4 - "Compile Bunsen and Pipeline": [INFO] Reactor Build Order:
Step #4 - "Compile Bunsen and Pipeline": [INFO] 
Step #4 - "Compile Bunsen and Pipeline": [INFO] root                                                               [pom]
Step #4 - "Compile Bunsen and Pipeline": [INFO] Bunsen Parent                                                      [pom]
Step #4 - "Compile Bunsen and Pipeline": [INFO] Extension Structure Definitions                                    [jar]
Step #4 - "Compile Bunsen and Pipeline": [INFO] Bunsen Core                                                        [jar]
Step #4 - "Compile Bunsen and Pipeline": [INFO] Bunsen Core R4                                                     [jar]
Step #4 - "Compile Bunsen and Pipeline": [INFO] Bunsen Core Stu3                                                   [jar]
Step #4 - "Compile Bunsen and Pipeline": [INFO] Bunsen Avro                                                        [jar]
Step #4 - "Compile Bunsen and Pipeline": [INFO] FHIR Analytics                                                     [pom]
Step #4 - "Compile Bunsen and Pipeline": [INFO] common                                                             [jar]
Step #4 - "Compile Bunsen and Pipeline": [INFO] batch                                                              [jar]
Step #4 - "Compile Bunsen and Pipeline": [INFO] streaming                                                          [jar]
Step #4 - "Compile Bunsen and Pipeline": [INFO] controller                                                         [jar]
Step #4 - "Compile Bunsen and Pipeline": [INFO] coverage                                                           [pom]
Step #4 - "Compile Bunsen and Pipeline": [INFO] 
Step #4 - "Compile Bunsen and Pipeline": [INFO] Using the MultiThreadedBuilder implementation with a thread count of 32
Step #4 - "Compile Bunsen and Pipeline": [INFO] 
Step #4 - "Compile Bunsen and Pipeline": [INFO] -------------------< com.google.fhir.analytics:root >-------------------
Step #4 - "Compile Bunsen and Pipeline": [INFO] Building root 0.2.7-SNAPSHOT                                      [1/13]
Step #4 - "Compile Bunsen and Pipeline": [INFO] --------------------------------[ pom ]---------------------------------
Step #4 - "Compile Bunsen and Pipeline": [INFO] 
Step #4 - "Compile Bunsen and Pipeline": [INFO] --- jacoco-maven-plugin:0.8.12:prepare-agent (default) @ root ---
Step #4 - "Compile Bunsen and Pipeline": [INFO] argLine set to -javaagent:/root/.m2/repository/org/jacoco/org.jacoco.agent/0.8.12/org.jacoco.agent-0.8.12-runtime.jar=destfile=/workspace/target/jacoco.exec
Step #4 - "Compile Bunsen and Pipeline": [INFO] 
Step #4 - "Compile Bunsen and Pipeline": [INFO] --- maven-install-plugin:2.4:install (default-install) @ root ---
Step #7 - "Build E2E Image": Reading package lists...
Step #4 - "Compile Bunsen and Pipeline": [INFO] Installing /workspace/pom.xml to /root/.m2/repository/com/google/fhir/analytics/root/0.2.7-SNAPSHOT/root-0.2.7-SNAPSHOT.pom
Step #4 - "Compile Bunsen and Pipeline": [INFO] 
Step #4 - "Compile Bunsen and Pipeline": [INFO] ------------------< com.cerner.bunsen:bunsen-parent >-------------------
Step #4 - "Compile Bunsen and Pipeline": [INFO] Building Bunsen Parent 0.5.14-SNAPSHOT                            [2/13]
Step #4 - "Compile Bunsen and Pipeline": [INFO] --------------------------------[ pom ]---------------------------------
Step #4 - "Compile Bunsen and Pipeline": [INFO] 
Step #4 - "Compile Bunsen and Pipeline": [INFO] ----------------< com.google.fhir.analytics:pipelines >-----------------
Step #4 - "Compile Bunsen and Pipeline": [INFO] Building FHIR Analytics 0.2.7-SNAPSHOT                            [3/13]
Step #4 - "Compile Bunsen and Pipeline": [INFO] --------------------------------[ pom ]---------------------------------
Step #7 - "Build E2E Image": Reading package lists...
Step #7 - "Build E2E Image": Building dependency tree...
Step #7 - "Build E2E Image": Reading state information...
Step #7 - "Build E2E Image": The following additional packages will be installed:
Step #7 - "Build E2E Image":   build-essential cpp cpp-9 dirmngr dpkg-dev fakeroot file g++ g++-9 gcc
Step #7 - "Build E2E Image":   gcc-10-base gcc-9 gcc-9-base gnupg gnupg-l10n gnupg-utils gpg gpg-agent
Step #7 - "Build E2E Image":   gpg-wks-client gpg-wks-server gpgconf gpgsm libalgorithm-diff-perl
Step #7 - "Build E2E Image":   libalgorithm-diff-xs-perl libalgorithm-merge-perl libasan5 libassuan0
Step #7 - "Build E2E Image":   libatomic1 libc-dev-bin libc6 libc6-dev libcc1-0 libcrypt-dev libdpkg-perl
Step #7 - "Build E2E Image":   libexpat1 libexpat1-dev libfakeroot libfile-fcntllock-perl libgcc-9-dev
Step #7 - "Build E2E Image":   libgcc-s1 libgomp1 libisl22 libitm1 libjq1 libksba8 liblocale-gettext-perl
Step #7 - "Build E2E Image":   liblsan0 libmagic-mgc libmagic1 libmpc3 libmpdec2 libmpfr6 libnpth0 libonig5
Step #7 - "Build E2E Image":   libpython3-dev libpython3-stdlib libpython3.8 libpython3.8-dev
Step #7 - "Build E2E Image":   libpython3.8-minimal libpython3.8-stdlib libquadmath0 libreadline8
Step #7 - "Build E2E Image":   libstdc++-9-dev libstdc++6 libtsan0 libubsan1 linux-libc-dev make manpages
Step #7 - "Build E2E Image":   manpages-dev mime-support pinentry-curses python-pip-whl python3 python3-dev
Step #7 - 
...
[Logs truncated due to log size limitations. For full logs, see https://storage.cloud.google.com/cloud-build-gh-logs/log-1a7513b1-541c-43db-a413-854db3488d59.txt.]
...
.ParquetUtil.flushAllWriters:303 - Flushing all Parquet writers for thread 62
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": Exception in thread "Timer-5" java.lang.NullPointerException: Cannot invoke "org.apache.parquet.column.ColumnWriteStore.getBufferedSize()" because "this.columnStore" is null
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.getDataSize(InternalParquetRecordWriter.java:147)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at org.apache.parquet.hadoop.ParquetWriter.getDataSize(ParquetWriter.java:339)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil$WriterWithCache.getDataSize(ParquetUtil.java:369)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil.flushViewWriter(ParquetUtil.java:296)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil.flushAllWriters(ParquetUtil.java:305)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil$1.run(ParquetUtil.java:155)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at java.base/java.util.TimerThread.mainLoop(Timer.java:566)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at java.base/java.util.TimerThread.run(Timer.java:516)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 20:30:32.700 [Timer-6] INFO  c.google.fhir.analytics.ParquetUtil com.google.fhir.analytics.ParquetUtil$1.run:154 - Flush timed out for thread 63
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 20:30:32.700 [Timer-8] INFO  c.google.fhir.analytics.ParquetUtil com.google.fhir.analytics.ParquetUtil$1.run:154 - Flush timed out for thread 65
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 20:30:32.700 [Timer-6] INFO  c.google.fhir.analytics.ParquetUtil com.google.fhir.analytics.ParquetUtil.flushAllWriters:303 - Flushing all Parquet writers for thread 63
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 20:30:32.700 [Timer-7] INFO  c.google.fhir.analytics.ParquetUtil com.google.fhir.analytics.ParquetUtil$1.run:154 - Flush timed out for thread 64
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 20:30:32.700 [Timer-8] INFO  c.google.fhir.analytics.ParquetUtil com.google.fhir.analytics.ParquetUtil.flushAllWriters:303 - Flushing all Parquet writers for thread 65
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 20:30:32.700 [Timer-7] INFO  c.google.fhir.analytics.ParquetUtil com.google.fhir.analytics.ParquetUtil.flushAllWriters:303 - Flushing all Parquet writers for thread 64
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": Exception in thread "Timer-6" java.lang.NullPointerException: Cannot invoke "org.apache.parquet.column.ColumnWriteStore.getBufferedSize()" because "this.columnStore" is null
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.getDataSize(InternalParquetRecordWriter.java:147)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at org.apache.parquet.hadoop.ParquetWriter.getDataSize(ParquetWriter.java:339)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil$WriterWithCache.getDataSize(ParquetUtil.java:369)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil.flushViewWriter(ParquetUtil.java:296)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil.flushAllWriters(ParquetUtil.java:305)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil$1.run(ParquetUtil.java:155)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at java.base/java.util.TimerThread.mainLoop(Timer.java:566)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at java.base/java.util.TimerThread.run(Timer.java:516)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": Exception in thread "Timer-8" java.lang.NullPointerException: Cannot invoke "org.apache.parquet.column.ColumnWriteStore.getBufferedSize()" because "this.columnStore" is null
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.getDataSize(InternalParquetRecordWriter.java:147)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at org.apache.parquet.hadoop.ParquetWriter.getDataSize(ParquetWriter.java:339)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil$WriterWithCache.getDataSize(ParquetUtil.java:369)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil.flushViewWriter(ParquetUtil.java:296)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil.flushAllWriters(ParquetUtil.java:305)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil$1.run(ParquetUtil.java:155)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at java.base/java.util.TimerThread.mainLoop(Timer.java:566)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at java.base/java.util.TimerThread.run(Timer.java:516)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": Exception in thread "Timer-7" java.lang.NullPointerException: Cannot invoke "org.apache.parquet.column.ColumnWriteStore.getBufferedSize()" because "this.columnStore" is null
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.getDataSize(InternalParquetRecordWriter.java:147)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at org.apache.parquet.hadoop.ParquetWriter.getDataSize(ParquetWriter.java:339)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil$WriterWithCache.getDataSize(ParquetUtil.java:369)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil.flushViewWriter(ParquetUtil.java:296)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil.flushAllWriters(ParquetUtil.java:305)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil$1.run(ParquetUtil.java:155)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at java.base/java.util.TimerThread.mainLoop(Timer.java:566)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at java.base/java.util.TimerThread.run(Timer.java:516)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 20:30:32.701 [Timer-9] INFO  c.google.fhir.analytics.ParquetUtil com.google.fhir.analytics.ParquetUtil$1.run:154 - Flush timed out for thread 66
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 20:30:32.702 [Timer-9] INFO  c.google.fhir.analytics.ParquetUtil com.google.fhir.analytics.ParquetUtil.flushAllWriters:303 - Flushing all Parquet writers for thread 66
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": Exception in thread "Timer-9" java.lang.NullPointerException: Cannot invoke "org.apache.parquet.column.ColumnWriteStore.getBufferedSize()" because "this.columnStore" is null
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.getDataSize(InternalParquetRecordWriter.java:147)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at org.apache.parquet.hadoop.ParquetWriter.getDataSize(ParquetWriter.java:339)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil$WriterWithCache.getDataSize(ParquetUtil.java:369)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil.flushViewWriter(ParquetUtil.java:296)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil.flushAllWriters(ParquetUtil.java:305)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil$1.run(ParquetUtil.java:155)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at java.base/java.util.TimerThread.mainLoop(Timer.java:566)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at java.base/java.util.TimerThread.run(Timer.java:516)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 20:30:32.702 [Timer-10] INFO  c.google.fhir.analytics.ParquetUtil com.google.fhir.analytics.ParquetUtil$1.run:154 - Flush timed out for thread 67
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 20:30:32.702 [Timer-10] INFO  c.google.fhir.analytics.ParquetUtil com.google.fhir.analytics.ParquetUtil.flushAllWriters:303 - Flushing all Parquet writers for thread 67
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": Exception in thread "Timer-10" java.lang.NullPointerException: Cannot invoke "org.apache.parquet.column.ColumnWriteStore.getBufferedSize()" because "this.columnStore" is null
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.getDataSize(InternalParquetRecordWriter.java:147)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at org.apache.parquet.hadoop.ParquetWriter.getDataSize(ParquetWriter.java:339)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil$WriterWithCache.getDataSize(ParquetUtil.java:369)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil.flushViewWriter(ParquetUtil.java:296)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil.flushAllWriters(ParquetUtil.java:305)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil$1.run(ParquetUtil.java:155)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at java.base/java.util.TimerThread.mainLoop(Timer.java:566)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at java.base/java.util.TimerThread.run(Timer.java:516)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 20:30:34.352 [Timer-13] INFO  c.google.fhir.analytics.ParquetUtil com.google.fhir.analytics.ParquetUtil$1.run:154 - Flush timed out for thread 99
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 20:30:34.352 [Timer-13] INFO  c.google.fhir.analytics.ParquetUtil com.google.fhir.analytics.ParquetUtil.flushAllWriters:303 - Flushing all Parquet writers for thread 99
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 20:30:34.352 [Timer-11] INFO  c.google.fhir.analytics.ParquetUtil com.google.fhir.analytics.ParquetUtil$1.run:154 - Flush timed out for thread 97
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 20:30:34.352 [Timer-11] INFO  c.google.fhir.analytics.ParquetUtil com.google.fhir.analytics.ParquetUtil.flushAllWriters:303 - Flushing all Parquet writers for thread 97
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 20:30:34.352 [Timer-12] INFO  c.google.fhir.analytics.ParquetUtil com.google.fhir.analytics.ParquetUtil$1.run:154 - Flush timed out for thread 98
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 20:30:34.353 [Timer-12] INFO  c.google.fhir.analytics.ParquetUtil com.google.fhir.analytics.ParquetUtil.flushAllWriters:303 - Flushing all Parquet writers for thread 98
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": Exception in thread "Timer-13" Exception in thread "Timer-11" java.lang.NullPointerException: Cannot invoke "org.apache.parquet.column.ColumnWriteStore.getBufferedSize()" because "this.columnStore" is null
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.getDataSize(InternalParquetRecordWriter.java:147)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at org.apache.parquet.hadoop.ParquetWriter.getDataSize(ParquetWriter.java:339)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil$WriterWithCache.getDataSize(ParquetUtil.java:369)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil.flushViewWriter(ParquetUtil.java:296)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil.flushAllWriters(ParquetUtil.java:305)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil$1.run(ParquetUtil.java:155)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at java.base/java.util.TimerThread.mainLoop(Timer.java:566)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at java.base/java.util.TimerThread.run(Timer.java:516)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": Exception in thread "Timer-12" java.lang.NullPointerException: Cannot invoke "org.apache.parquet.column.ColumnWriteStore.getBufferedSize()" because "this.columnStore" is null
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.getDataSize(InternalParquetRecordWriter.java:147)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at org.apache.parquet.hadoop.ParquetWriter.getDataSize(ParquetWriter.java:339)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil$WriterWithCache.getDataSize(ParquetUtil.java:369)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil.flushViewWriter(ParquetUtil.java:296)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil.flushAllWriters(ParquetUtil.java:305)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil$1.run(ParquetUtil.java:155)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at java.base/java.util.TimerThread.mainLoop(Timer.java:566)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at java.base/java.util.TimerThread.run(Timer.java:516)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": java.lang.NullPointerException: Cannot invoke "org.apache.parquet.column.ColumnWriteStore.getBufferedSize()" because "this.columnStore" is null
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.getDataSize(InternalParquetRecordWriter.java:147)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at org.apache.parquet.hadoop.ParquetWriter.getDataSize(ParquetWriter.java:339)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil$WriterWithCache.getDataSize(ParquetUtil.java:369)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil.flushViewWriter(ParquetUtil.java:296)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil.flushAllWriters(ParquetUtil.java:305)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil$1.run(ParquetUtil.java:155)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at java.base/java.util.TimerThread.mainLoop(Timer.java:566)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at java.base/java.util.TimerThread.run(Timer.java:516)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 20:30:34.353 [Timer-14] INFO  c.google.fhir.analytics.ParquetUtil com.google.fhir.analytics.ParquetUtil$1.run:154 - Flush timed out for thread 100
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 20:30:34.353 [Timer-14] INFO  c.google.fhir.analytics.ParquetUtil com.google.fhir.analytics.ParquetUtil.flushAllWriters:303 - Flushing all Parquet writers for thread 100
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": Exception in thread "Timer-14" java.lang.NullPointerException: Cannot invoke "org.apache.parquet.column.ColumnWriteStore.getBufferedSize()" because "this.columnStore" is null
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.getDataSize(InternalParquetRecordWriter.java:147)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at org.apache.parquet.hadoop.ParquetWriter.getDataSize(ParquetWriter.java:339)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil$WriterWithCache.getDataSize(ParquetUtil.java:369)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil.flushViewWriter(ParquetUtil.java:296)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil.flushAllWriters(ParquetUtil.java:305)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil$1.run(ParquetUtil.java:155)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at java.base/java.util.TimerThread.mainLoop(Timer.java:566)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at java.base/java.util.TimerThread.run(Timer.java:516)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 20:30:34.366 [Timer-18] INFO  c.google.fhir.analytics.ParquetUtil com.google.fhir.analytics.ParquetUtil$1.run:154 - Flush timed out for thread 104
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 20:30:34.366 [Timer-15] INFO  c.google.fhir.analytics.ParquetUtil com.google.fhir.analytics.ParquetUtil$1.run:154 - Flush timed out for thread 101
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 20:30:34.366 [Timer-18] INFO  c.google.fhir.analytics.ParquetUtil com.google.fhir.analytics.ParquetUtil.flushAllWriters:303 - Flushing all Parquet writers for thread 104
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 20:30:34.366 [Timer-15] INFO  c.google.fhir.analytics.ParquetUtil com.google.fhir.analytics.ParquetUtil.flushAllWriters:303 - Flushing all Parquet writers for thread 101
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": Exception in thread "Timer-18" java.lang.NullPointerException: Cannot invoke "org.apache.parquet.column.ColumnWriteStore.getBufferedSize()" because "this.columnStore" is null
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.getDataSize(InternalParquetRecordWriter.java:147)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at org.apache.parquet.hadoop.ParquetWriter.getDataSize(ParquetWriter.java:339)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil$WriterWithCache.getDataSize(ParquetUtil.java:369)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil.flushViewWriter(ParquetUtil.java:296)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil.flushAllWriters(ParquetUtil.java:305)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil$1.run(ParquetUtil.java:155)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at java.base/java.util.TimerThread.mainLoop(Timer.java:566)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at java.base/java.util.TimerThread.run(Timer.java:516)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 20:30:34.366 [Timer-16] INFO  c.google.fhir.analytics.ParquetUtil com.google.fhir.analytics.ParquetUtil$1.run:154 - Flush timed out for thread 102
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": Exception in thread "Timer-15" java.lang.NullPointerException: Cannot invoke "org.apache.parquet.column.ColumnWriteStore.getBufferedSize()" because "this.columnStore" is null
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.getDataSize(InternalParquetRecordWriter.java:147)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at org.apache.parquet.hadoop.ParquetWriter.getDataSize(ParquetWriter.java:339)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil$WriterWithCache.getDataSize(ParquetUtil.java:369)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil.flushViewWriter(ParquetUtil.java:296)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil.flushAllWriters(ParquetUtil.java:305)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil$1.run(ParquetUtil.java:155)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at java.base/java.util.TimerThread.mainLoop(Timer.java:566)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at java.base/java.util.TimerThread.run(Timer.java:516)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 20:30:34.367 [Timer-16] INFO  c.google.fhir.analytics.ParquetUtil com.google.fhir.analytics.ParquetUtil.flushAllWriters:303 - Flushing all Parquet writers for thread 102
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": Exception in thread "Timer-16" java.lang.NullPointerException: Cannot invoke "org.apache.parquet.column.ColumnWriteStore.getBufferedSize()" because "this.columnStore" is null
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.getDataSize(InternalParquetRecordWriter.java:147)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at org.apache.parquet.hadoop.ParquetWriter.getDataSize(ParquetWriter.java:339)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil$WriterWithCache.getDataSize(ParquetUtil.java:369)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 20:30:34.367 [Timer-17] INFO  c.google.fhir.analytics.ParquetUtil com.google.fhir.analytics.ParquetUtil$1.run:154 - Flush timed out for thread 103
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil.flushViewWriter(ParquetUtil.java:296)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil.flushAllWriters(ParquetUtil.java:305)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil$1.run(ParquetUtil.java:155)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at java.base/java.util.TimerThread.mainLoop(Timer.java:566)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at java.base/java.util.TimerThread.run(Timer.java:516)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 20:30:34.367 [Timer-17] INFO  c.google.fhir.analytics.ParquetUtil com.google.fhir.analytics.ParquetUtil.flushAllWriters:303 - Flushing all Parquet writers for thread 103
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": Exception in thread "Timer-17" java.lang.NullPointerException: Cannot invoke "org.apache.parquet.column.ColumnWriteStore.getBufferedSize()" because "this.columnStore" is null
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.getDataSize(InternalParquetRecordWriter.java:147)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at org.apache.parquet.hadoop.ParquetWriter.getDataSize(ParquetWriter.java:339)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil$WriterWithCache.getDataSize(ParquetUtil.java:369)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil.flushViewWriter(ParquetUtil.java:296)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil.flushAllWriters(ParquetUtil.java:305)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil$1.run(ParquetUtil.java:155)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at java.base/java.util.TimerThread.mainLoop(Timer.java:566)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at java.base/java.util.TimerThread.run(Timer.java:516)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 20:30:34.368 [Timer-19] INFO  c.google.fhir.analytics.ParquetUtil com.google.fhir.analytics.ParquetUtil$1.run:154 - Flush timed out for thread 105
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 20:30:34.368 [Timer-19] INFO  c.google.fhir.analytics.ParquetUtil com.google.fhir.analytics.ParquetUtil.flushAllWriters:303 - Flushing all Parquet writers for thread 105
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": Exception in thread "Timer-19" java.lang.NullPointerException: Cannot invoke "org.apache.parquet.column.ColumnWriteStore.getBufferedSize()" because "this.columnStore" is null
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at org.apache.parquet.hadoop.InternalParquetRecordWriter.getDataSize(InternalParquetRecordWriter.java:147)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at org.apache.parquet.hadoop.ParquetWriter.getDataSize(ParquetWriter.java:339)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil$WriterWithCache.getDataSize(ParquetUtil.java:369)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil.flushViewWriter(ParquetUtil.java:296)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil.flushAllWriters(ParquetUtil.java:305)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at com.google.fhir.analytics.ParquetUtil$1.run(ParquetUtil.java:155)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at java.base/java.util.TimerThread.mainLoop(Timer.java:566)
Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source": 	at java.base/java.util.TimerThread.run(Timer.java:516)
Finished Step #14 - "Run Batch Pipeline for BULK_EXPORT mode with HAPI source"
Starting Step #15 - "Run E2E Test for BULK_EXPORT mode with HAPI source"
Step #15 - "Run E2E Test for BULK_EXPORT mode with HAPI source": Already have image: us-docker.pkg.dev/cloud-build-fhir/fhir-analytics/e2e-tests:d78594a
Step #15 - "Run E2E Test for BULK_EXPORT mode with HAPI source": Checking if the Parquet-tools JAR exists...
Step #15 - "Run E2E Test for BULK_EXPORT mode with HAPI source": Parquet-tools JAR exists in /workspace/e2e-tests/controller-spark
Step #15 - "Run E2E Test for BULK_EXPORT mode with HAPI source": E2E TEST: ---- STARTING BULK_EXPORT TEST ----
Step #15 - "Run E2E Test for BULK_EXPORT mode with HAPI source": E2E TEST: Total FHIR source test patients ---> 80
Step #15 - "Run E2E Test for BULK_EXPORT mode with HAPI source": E2E TEST: Total FHIR source test encounters ---> 4006
Step #15 - "Run E2E Test for BULK_EXPORT mode with HAPI source": E2E TEST: Total FHIR source test obs ---> 17279
Step #15 - "Run E2E Test for BULK_EXPORT mode with HAPI source": E2E TEST: Counting number of patients, encounters and obs sinked to parquet files
Step #15 - "Run E2E Test for BULK_EXPORT mode with HAPI source": E2E TEST: Total patients synced to parquet ---> 79
Step #15 - "Run E2E Test for BULK_EXPORT mode with HAPI source": E2E TEST: Total encounters synced to parquet ---> 4006
Step #15 - "Run E2E Test for BULK_EXPORT mode with HAPI source": E2E TEST: Total obs synced to parquet ---> 17279
Step #15 - "Run E2E Test for BULK_EXPORT mode with HAPI source": E2E TEST: Parquet Sink Test Non-Streaming mode
Step #15 - "Run E2E Test for BULK_EXPORT mode with HAPI source": E2E TEST: Total patient flat rows synced to parquet ---> 106
Step #15 - "Run E2E Test for BULK_EXPORT mode with HAPI source": E2E TEST: Total encounter flat rows synced to parquet ---> 4006
Step #15 - "Run E2E Test for BULK_EXPORT mode with HAPI source": E2E TEST: Total observation flat rows synced to parquet ---> 17279
Step #15 - "Run E2E Test for BULK_EXPORT mode with HAPI source": E2E TEST: PARQUET SINK TEST FAILED USING BULK_EXPORT MODE
Finished Step #15 - "Run E2E Test for BULK_EXPORT mode with HAPI source"
ERROR
ERROR: build step 15 "us-docker.pkg.dev/cloud-build-fhir/fhir-analytics/e2e-tests:d78594a" failed: step exited with non-zero status: 1
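Note: the E2E failure is consistent with the flush exceptions above: the counts read back from Parquet (79 patients, 106 flat patient rows) do not match the expected totals for the 80 source patients, so the BULK_EXPORT row-count check fails. The e2e-tests image performs these counts with the bundled parquet-tools JAR; purely as an illustration of the same kind of check (the default path below is hypothetical, not the location used by the e2e-tests image), the row count of a Parquet file can also be read programmatically:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.hadoop.ParquetFileReader;
import org.apache.parquet.hadoop.util.HadoopInputFile;

// Illustrative row-count check for a single Parquet file.
public class RowCount {
  public static void main(String[] args) throws IOException {
    Path file = new Path(args.length > 0 ? args[0] : "/tmp/parquet/Patient/part-0.parquet");
    try (ParquetFileReader reader =
        ParquetFileReader.open(HadoopInputFile.fromPath(file, new Configuration()))) {
      // getRecordCount() sums the row counts of all row groups in the file.
      System.out.println(file + " rows: " + reader.getRecordCount());
    }
  }
}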
Step #2 - "Launch Sink Server JDBC": �[1A�[2K
Creating sink-server-jdbc ... �[32mdone�[0m
�[1B
Step #1 - "Launch Sink Server Search": �[1A�[2K
Creating sink-server-search ... �[32mdone�[0m
�[1B
Step #0 - "Launch HAPI Source Server": �[2A�[2K
Creating hapi-server  ... �[32mdone�[0m
�[2B�[1A�[2K
Creating hapi-fhir-db ... �[32mdone�[0m
�[1B
Step #19 - "Launch HAPI FHIR Sink Server Controller": �[1A�[2K
Creating sink-server-controller ... �[32mdone�[0m
�[1B
Step #20 - "Bring up controller and Spark containers": �[1A�[2K
Creating pipeline-controller ... �[32mdone�[0m
�[1B

Build Log: https://storage.cloud.google.com/cloud-build-gh-logs/log-1a7513b1-541c-43db-a413-854db3488d59.txt