🥊 Benchmark CPU vs. GPU #4915
Replies: 5 comments 2 replies
-
If we can add API access to an Azure OpenAI private instance, do we want to add that as well, or does moving data cross-cloud basically disqualify it?
-
Using the following code (shoutout to Bing for generating this 🤝) running on AP's GPU-enabled VS Code:

```python
import tensorflow as tf
import time

# Function to create a more complex model
def create_complex_model():
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
        tf.keras.layers.MaxPooling2D((2, 2)),
        tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
        tf.keras.layers.MaxPooling2D((2, 2)),
        tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax')
    ])
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    return model

# Function to benchmark training time on a given device
def benchmark(device):
    with tf.device(device):
        model = create_complex_model()
        (x_train, y_train), _ = tf.keras.datasets.cifar10.load_data()
        x_train = x_train.astype('float32') / 255
        start_time = time.time()
        model.fit(x_train, y_train, epochs=10, batch_size=64, verbose=0)
        end_time = time.time()
        return end_time - start_time

# Benchmark on GPU
print("Benchmarking GPU")
gpu_time = benchmark('/GPU:0')
print(f"GPU training time: {gpu_time:.2f} seconds")

# Benchmark on CPU
print("Benchmarking CPU")
cpu_time = benchmark('/CPU:0')
print(f"CPU training time: {cpu_time:.2f} seconds")
```

GPU training time: 38.11 seconds

Benchmarking on G5:
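The script above wraps the workload in start/end timestamps; that pattern can be factored into a reusable helper for whatever other workloads end up in the benchmark. A minimal stdlib-only sketch (the `run_benchmark` helper is hypothetical, not from the thread) using `time.perf_counter` and best-of-N timing to damp warm-up noise:

```python
import time

def run_benchmark(name, workload, repeats=3):
    """Time a callable workload, returning the best-of-N wall-clock time.

    Best-of-N reduces noise from warm-up and background activity.
    """
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        workload()
        timings.append(time.perf_counter() - start)
    best = min(timings)
    print(f"{name}: {best:.4f} seconds (best of {repeats})")
    return best

# Example: a small CPU-bound workload standing in for model.fit()
elapsed = run_benchmark("sum-of-squares", lambda: sum(i * i for i in range(100_000)))
```

The same `workload` callable could then be pointed at the CPU node, the GPU node, or SageMaker without duplicating the timing code.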
-
@jacobwoffenden was this VS Code CPU vs. VS Code GPU?
-
Thanks for confirming, Jacob. Julia asked me to "have a look at why Bedrock is being slow", but it is not slow: this is just a comparison of a CPU node vs. a GPU node, and it could be relevant in deciding where to place workloads. I will see if I can find some way of training an LLM on the GPU and on Bedrock for a comparison. It would also be a good idea to run the TensorFlow model on SageMaker to see the comparison there.
-
Idea Description
Create something to benchmark our compute offerings
Plus there is some chatter about GPU-enabled workloads on Airflow (#4907)
Why Should We Do This
It would be useful to have definitive figures from tests run against all 3 capabilities we provide to Airflow customers
Definition of Done
Potentially a suite of different benchmarks we can run to produce figures
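A suite like this could start as a simple registry of named workloads run through one timer, producing comparable figures per offering. A rough stdlib-only sketch (the `run_suite` helper and placeholder workloads are illustrative assumptions, not an actual implementation):

```python
import time

def run_suite(suite):
    """Run every named workload once and collect wall-clock timings."""
    results = {}
    for name, workload in suite.items():
        start = time.perf_counter()
        workload()
        results[name] = time.perf_counter() - start
    return results

# Hypothetical placeholder workloads; real entries would run the same
# training job on each compute offering (e.g. VS Code CPU, VS Code GPU,
# SageMaker) and record the elapsed time for comparison.
suite = {
    "sum-loop": lambda: sum(range(1_000_000)),
    "string-join": lambda: "".join(str(i) for i in range(100_000)),
}

for name, seconds in run_suite(suite).items():
    print(f"{name}: {seconds:.4f}s")
```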
What skills do we need?
Notes
https://techcommunity.microsoft.com/t5/azure-high-performance-computing/exploring-cpu-vs-gpu-speed-in-ai-training-a-demonstration-with/ba-p/4014242