Hello HLS4ML Community,

I recently generated a bitfile for my design using HLS4ML and opened the block design in Xilinx Vivado. All IPs are connected correctly except for my neural network IP (myproject_axi_0).

Here are the steps I've taken:

1. Generated the bitfile successfully.
2. Opened the block design (project_1.xpr) in Vivado.
3. Expected the "Run Connection Automation" prompt to connect myproject_axi_0 automatically, but it never appeared.

Details:

Vivado Version: 2020.1
HLS4ML Version: 0.8.1

Here is the relevant part of the C/RTL synthesis log:
***** C/RTL SYNTHESIS *****
INFO: [SCHED 204-61] Option 'relax_ii_for_timing' is enabled, will increase II to preserve clock frequency constraints.
INFO: [HLS 200-10] Analyzing design file 'firmware/myproject.cpp' ...
WARNING: [HLS 214-113] Either use an argument of the function or declare the variable inside the dataflow loop body: firmware/myproject.cpp:43:71
WARNING: [HLS 214-113] Either use an argument of the function or declare the variable inside the dataflow loop body: firmware/myproject.cpp:43:75
WARNING: [HLS 214-113] Either use an argument of the function or declare the variable inside the dataflow loop body: firmware/myproject.cpp:47:74
WARNING: [HLS 214-113] Either use an argument of the function or declare the variable inside the dataflow loop body: firmware/myproject.cpp:47:79
WARNING: [HLS 214-113] Either use an argument of the function or declare the variable inside the dataflow loop body: firmware/myproject.cpp:55:67
WARNING: [HLS 214-113] Either use an argument of the function or declare the variable inside the dataflow loop body: firmware/myproject.cpp:55:71
WARNING: [HLS 214-113] Either use an argument of the function or declare the variable inside the dataflow loop body: firmware/myproject.cpp:59:74
WARNING: [HLS 214-113] Either use an argument of the function or declare the variable inside the dataflow loop body: firmware/myproject.cpp:59:79
WARNING: [HLS 214-113] Either use an argument of the function or declare the variable inside the dataflow loop body: firmware/myproject.cpp:67:67
WARNING: [HLS 214-113] Either use an argument of the function or declare the variable inside the dataflow loop body: firmware/myproject.cpp:67:71
WARNING: [HLS 214-113] Either use an argument of the function or declare the variable inside the dataflow loop body: firmware/myproject.cpp:69:76
WARNING: [HLS 214-113] Either use an argument of the function or declare the variable inside the dataflow loop body: firmware/myproject.cpp:69:81
WARNING: [HLS 200-471] Dataflow form checks found 12 issue(s) in file firmware/myproject.cpp
INFO: [HLS 200-10] Analyzing design file 'firmware/myproject_axi.cpp' ...
WARNING: [HLS 214-114] Since the only kind of statements allowed in a canonical dataflow region are variable declarations and function calls, the compiler may not be able to correctly handle the region: firmware/myproject_axi.cpp:17:2
WARNING: [HLS 214-114] Since the only kind of statements allowed in a canonical dataflow region are variable declarations and function calls, the compiler may not be able to correctly handle the region: firmware/myproject_axi.cpp:29:5
WARNING: [HLS 200-471] Dataflow form checks found 2 issue(s) in file firmware/myproject_axi.cpp
WARNING: [XFORM 203-631] Renaming function 'nnet::relu<nnet::array<ap_fixed<16, 6, (ap_q_mode)5, (ap_o_mode)3, 0>, 32u>, nnet::array<ap_ufixed<4, 0, (ap_q_mode)4, (ap_o_mode)0, 0>, 32u>, relu_config4>' to 'relu<array<ap_fixed,32u>,array<ap_ufixed<4,0,4,0,0>,32u>,relu_config4>' (firmware/nnet_utils/nnet_activation_stream.h:41:31)
WARNING: [XFORM 203-631] Renaming function 'nnet::relu<nnet::array<ap_fixed<16, 6, (ap_q_mode)5, (ap_o_mode)3, 0>, 16u>, nnet::array<ap_ufixed<4, 0, (ap_q_mode)4, (ap_o_mode)0, 0>, 16u>, relu_config7>' to 'relu<array<ap_fixed,16u>,array<ap_ufixed<4,0,4,0,0>,16u>,relu_config7>' (firmware/nnet_utils/nnet_activation_stream.h:41:1)
WARNING: [XFORM 203-631] Renaming function 'nnet::normalize<nnet::array<ap_fixed<16, 6, (ap_q_mode)5, (ap_o_mode)3, 0>, 32u>, nnet::array<ap_fixed<16, 6, (ap_q_mode)5, (ap_o_mode)3, 0>, 32u>, config10>' to 'normalize<array<ap_fixed,32u>,array<ap_fixed<16,6,5,3,0>,32u>,config10>' (firmware/nnet_utils/nnet_batchnorm_stream.h:22:1)
WARNING: [XFORM 203-631] Renaming function 'nnet::normalize<nnet::array<ap_fixed<16, 6, (ap_q_mode)5, (ap_o_mode)3, 0>, 1u>, nnet::array<ap_fixed<16, 6, (ap_q_mode)5, (ap_o_mode)3, 0>, 1u>, config12>' to 'normalize<array<ap_fixed,1u>,array<ap_fixed<16,6,5,3,0>,1u>,config12>' (firmware/nnet_utils/nnet_batchnorm_stream.h:22:1)
WARNING: [XFORM 203-631] Renaming function 'nnet::normalize<nnet::array<ap_fixed<16, 6, (ap_q_mode)5, (ap_o_mode)3, 0>, 16u>, nnet::array<ap_fixed<16, 6, (ap_q_mode)5, (ap_o_mode)3, 0>, 16u>, config11>' to 'normalize<array<ap_fixed,16u>,array<ap_fixed<16,6,5,3,0>,16u>,config11>' (firmware/nnet_utils/nnet_batchnorm_stream.h:22:1)
WARNING: [XFORM 203-631] Renaming function 'nnet::dense_wrapper<ap_ufixed<4, 0, (ap_q_mode)4, (ap_o_mode)0, 0>, ap_fixed<16, 6, (ap_q_mode)5, (ap_o_mode)3, 0>, config8>' to 'dense_wrapper<ap_ufixed<4, 0, 4, 0, 0>, ap_fixed<16, 6, 5, 3, 0>, config8>' (firmware/nnet_utils/nnet_dense_stream.h:13)
WARNING: [XFORM 203-631] Renaming function 'nnet::dense_wrapper<ap_ufixed<4, 0, (ap_q_mode)4, (ap_o_mode)0, 0>, ap_fixed<16, 6, (ap_q_mode)5, (ap_o_mode)3, 0>, config5>' to 'dense_wrapper<ap_ufixed<4, 0, 4, 0, 0>, ap_fixed<16, 6, 5, 3, 0>, config5>' (firmware/nnet_utils/nnet_dense_stream.h:13)
WARNING: [XFORM 203-631] Renaming function 'nnet::dense_wrapper<ap_fixed<16, 6, (ap_q_mode)5, (ap_o_mode)3, 0>, ap_fixed<16, 6, (ap_q_mode)5, (ap_o_mode)3, 0>, config2>' to 'dense_wrapper<ap_fixed<16, 6, 5, 3, 0>, ap_fixed<16, 6, 5, 3, 0>, config2>' (firmware/nnet_utils/nnet_dense_stream.h:13)
WARNING: [XFORM 203-631] Renaming function 'nnet::dense<nnet::array<ap_ufixed<4, 0, (ap_q_mode)4, (ap_o_mode)0, 0>, 32u>, nnet::array<ap_fixed<16, 6, (ap_q_mode)5, (ap_o_mode)3, 0>, 16u>, config5>' to 'dense<array<ap_ufixed,32u>,array<ap_fixed<16,6,5,3,0>,16u>,config5>' (firmware/nnet_utils/nnet_dense_stream.h:36:1)
WARNING: [XFORM 203-631] Renaming function 'nnet::dense<nnet::array<ap_ufixed<4, 0, (ap_q_mode)4, (ap_o_mode)0, 0>, 16u>, nnet::array<ap_fixed<16, 6, (ap_q_mode)5, (ap_o_mode)3, 0>, 1u>, config8>' to 'dense<array<ap_ufixed,16u>,array<ap_fixed<16,6,5,3,0>,1u>,config8>' (firmware/nnet_utils/nnet_dense_stream.h:36:1)
WARNING: [XFORM 203-631] Renaming function 'nnet::dense<nnet::array<ap_fixed<16, 6, (ap_q_mode)5, (ap_o_mode)3, 0>, 50u>, nnet::array<ap_fixed<16, 6, (ap_q_mode)5, (ap_o_mode)3, 0>, 32u>, config2>' to 'dense<array<ap_fixed,50u>,array<ap_fixed<16,6,5,3,0>,32u>,config2>' (firmware/nnet_utils/nnet_dense_stream.h:36:1)
INFO: [HLS 200-111] Finished Architecture Synthesis Time (s): cpu = 00:01:14 ; elapsed = 00:01:18 . Memory (MB): peak = 3065.238 ; gain = 2632.613 ; free physical = 2269 ; free virtual = 7321
INFO: [HLS 200-10] Starting hardware synthesis ...
INFO: [HLS 200-10] Synthesizing 'myproject_axi' ...
WARNING: [SYN 201-103] Legalizing function name 'dense_wrapper<ap_fixed<16, 6, 5, 3, 0>, ap_fixed<16, 6, 5, 3, 0>, config2>' to 'dense_wrapper_ap_fixed_16_6_5_3_0_ap_fixed_16_6_5_3_0_config2_s'.
WARNING: [SYN 201-103] Legalizing function name 'dense<array<ap_fixed,50u>,array<ap_fixed<16,6,5,3,0>,32u>,config2>' to 'dense_array_ap_fixed_50u_array_ap_fixed_16_6_5_3_0_32u_config2_s'.
WARNING: [SYN 201-103] Legalizing function name 'normalize<array<ap_fixed,32u>,array<ap_fixed<16,6,5,3,0>,32u>,config10>' to 'normalize_array_ap_fixed_32u_array_ap_fixed_16_6_5_3_0_32u_config10_s'.
WARNING: [SYN 201-103] Legalizing function name 'relu<array<ap_fixed,32u>,array<ap_ufixed<4,0,4,0,0>,32u>,relu_config4>' to 'relu_array_ap_fixed_32u_array_ap_ufixed_4_0_4_0_0_32u_relu_config4_s'.
WARNING: [SYN 201-103] Legalizing function name 'dense_wrapper<ap_ufixed<4, 0, 4, 0, 0>, ap_fixed<16, 6, 5, 3, 0>, config5>' to 'dense_wrapper_ap_ufixed_4_0_4_0_0_ap_fixed_16_6_5_3_0_config5_s'.
WARNING: [SYN 201-103] Legalizing function name 'dense<array<ap_ufixed,32u>,array<ap_fixed<16,6,5,3,0>,16u>,config5>' to 'dense_array_ap_ufixed_32u_array_ap_fixed_16_6_5_3_0_16u_config5_s'.
WARNING: [SYN 201-103] Legalizing function name 'normalize<array<ap_fixed,16u>,array<ap_fixed<16,6,5,3,0>,16u>,config11>' to 'normalize_array_ap_fixed_16u_array_ap_fixed_16_6_5_3_0_16u_config11_s'.
WARNING: [SYN 201-103] Legalizing function name 'relu<array<ap_fixed,16u>,array<ap_ufixed<4,0,4,0,0>,16u>,relu_config7>' to 'relu_array_ap_fixed_16u_array_ap_ufixed_4_0_4_0_0_16u_relu_config7_s'.
WARNING: [SYN 201-103] Legalizing function name 'dense_wrapper<ap_ufixed<4, 0, 4, 0, 0>, ap_fixed<16, 6, 5, 3, 0>, config8>' to 'dense_wrapper_ap_ufixed_4_0_4_0_0_ap_fixed_16_6_5_3_0_config8_s'.
WARNING: [SYN 201-103] Legalizing function name 'dense<array<ap_ufixed,16u>,array<ap_fixed<16,6,5,3,0>,1u>,config8>' to 'dense_array_ap_ufixed_16u_array_ap_fixed_16_6_5_3_0_1u_config8_s'.
WARNING: [SYN 201-103] Legalizing function name 'normalize<array<ap_fixed,1u>,array<ap_fixed<16,6,5,3,0>,1u>,config12>' to 'normalize_array_ap_fixed_1u_array_ap_fixed_16_6_5_3_0_1u_config12_s'.
WARNING: [SYN 201-103] Legalizing function name 'Block_myproject_axi_.exit2622_proc' to 'Block_myproject_axi_exit2622_proc'.
WARNING: [SYN 201-223] Checking resource limit in 'dense_wrapper<ap_fixed<16, 6, 5, 3, 0>, ap_fixed<16, 6, 5, 3, 0>, config2>': cannot find any operation of 'mul'.
WARNING: [SYN 201-223] Checking resource limit in 'normalize<array<ap_fixed,32u>,array<ap_fixed<16,6,5,3,0>,32u>,config10>': cannot find any operation of 'mul'.
WARNING: [SYN 201-223] Checking resource limit in 'dense_wrapper<ap_ufixed<4, 0, 4, 0, 0>, ap_fixed<16, 6, 5, 3, 0>, config5>': cannot find any operation of 'mul'.
WARNING: [SYN 201-223] Checking resource limit in 'normalize<array<ap_fixed,16u>,array<ap_fixed<16,6,5,3,0>,16u>,config11>': cannot find any operation of 'mul'.
WARNING: [SYN 201-223] Checking resource limit in 'dense_wrapper<ap_ufixed<4, 0, 4, 0, 0>, ap_fixed<16, 6, 5, 3, 0>, config8>': cannot find any operation of 'mul'.
WARNING: [SYN 201-223] Checking resource limit in 'normalize<array<ap_fixed,1u>,array<ap_fixed<16,6,5,3,0>,1u>,config12>': cannot find any operation of 'mul'.
And here is my Python code:
import os
import pprint

import numpy as np
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dropout
from tensorflow.keras.callbacks import EarlyStopping
from qkeras import QDense, QActivation, quantized_bits, quantized_relu
from tensorflow_model_optimization.python.core.sparsity.keras import prune, pruning_callbacks, pruning_schedule
from tensorflow_model_optimization.sparsity.keras import strip_pruning
import hls4ml

# 'dataset', 'lower_bound' and 'upper_bound' are defined earlier (loading code omitted)
# Remove outliers
dataset = dataset[(dataset['Consumption'] >= lower_bound) & (dataset['Consumption'] <= upper_bound)]

# Scale the data
scaler = MinMaxScaler(feature_range=(0, 1))
scaled_data = scaler.fit_transform(dataset[['Consumption']])  # Keep it as a 2D array

# Split the data into training, testing, and validation sets (chronological split)
training_size = int(len(scaled_data) * 0.80)
train_data = scaled_data[:training_size]
test_data = scaled_data[training_size:]
val_size = int(len(train_data) * 0.20)
train_data, val_data = train_data[:-val_size], train_data[-val_size:]
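Spelling out the resulting proportions (my own arithmetic, added for clarity): the chronological split is roughly 64% train, 16% validation, and 20% test:

n = len(scaled_data)
assert len(test_data) == n - int(n * 0.80)                           # last ~20% of the series
assert len(val_data) == int(int(n * 0.80) * 0.20)                    # ~16% (20% of the 80%)
assert len(train_data) == int(n * 0.80) - int(int(n * 0.80) * 0.20)  # remaining ~64%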
# Function to create dataset for Dense model
def create_dataset(dataset, time_step=1):
    dataX, dataY = [], []
    for i in range(len(dataset) - time_step):
        a = dataset[i:(i + time_step), 0]  # window of time_step consecutive values
        dataX.append(a)
        dataY.append(dataset[i + time_step, 0])  # the next value is the target
    return np.array(dataX), np.array(dataY)
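To make the windowing concrete, here is a toy run of create_dataset (the array below is illustrative, not my data):

toy = np.arange(6, dtype=float).reshape(-1, 1)  # shaped like scaled_data: (6, 1)
X_demo, y_demo = create_dataset(toy, time_step=3)
# X_demo -> [[0. 1. 2.], [1. 2. 3.], [2. 3. 4.]]
# y_demo -> [3. 4. 5.]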
# Creating datasets
time_step = 50
X_train, y_train = create_dataset(train_data, time_step)
X_test, ytest = create_dataset(test_data, time_step)
X_val, yval = create_dataset(val_data, time_step)
# Reshape for Dense input
X_train = X_train.reshape(X_train.shape[0], -1) # Flatten for Dense input
X_test = X_test.reshape(X_test.shape[0], -1)
X_val = X_val.reshape(X_val.shape[0], -1)
# Create directory for saving files if it doesn't exist
os.makedirs('package', exist_ok=True)
# Save the test data
np.save('package/X_test.npy', X_test) # Save X_test array
np.save('package/y_test.npy', ytest) # Save y_test array
# Build the model
model = Sequential([
    # Input layer
    QDense(
        32,  # Reduced neurons
        input_shape=(X_train.shape[1],),
        kernel_quantizer=quantized_bits(4, 0),  # Simplified quantization
        bias_quantizer=quantized_bits(4, 0),
        kernel_initializer='lecun_uniform',
        kernel_regularizer='l2',
        name='dense_fc1'
    ),
    QActivation(activation=quantized_relu(4), name='relu1'),
    Dropout(0.2),
    # Hidden layer
    QDense(
        16,  # Further reduced neurons
        kernel_quantizer=quantized_bits(4, 0),  # Simplified quantization
        bias_quantizer=quantized_bits(4, 0),
        kernel_initializer='lecun_uniform',
        name='dense_fc2'
    ),
    QActivation(activation=quantized_relu(4), name='relu2'),
    # Output layer
    QDense(
        1,
        kernel_quantizer=quantized_bits(4, 0),  # Simplified quantization
        bias_quantizer=quantized_bits(4, 0),
        kernel_initializer='lecun_uniform',
        name='output'
    )
])
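Because the first QDense layer specifies input_shape, the model is built right away; with time_step = 50, a quick summary should show the 50 -> 32 -> 16 -> 1 topology, which is exactly what appears in the synthesis log as config2, config5, and config8:

model.summary()  # Dense 50->32 (config2), 32->16 (config5), 16->1 (config8) in the HLS log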
pruning_params = {
    "pruning_schedule": pruning_schedule.ConstantSparsity(0.80, begin_step=2000, frequency=100)
}

# Apply pruning to the model
model = prune.prune_low_magnitude(model, **pruning_params)

# Compile the model
model.compile(optimizer='adam', loss='mean_squared_error')

# Define EarlyStopping callback
early_stopping = EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True)

# Set up the pruning callback to update pruning steps during training
callbacks = [
    pruning_callbacks.UpdatePruningStep(),
    early_stopping
]

# Fit the model with early stopping and pruning updates
history = model.fit(X_train, y_train, validation_data=(X_val, yval),
                    verbose=1, epochs=10, batch_size=20,
                    callbacks=callbacks)

# Strip the pruning wrappers from the model after training
model = strip_pruning(model)
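As a quick sanity check of my own (not part of the hls4ml flow), the achieved sparsity after strip_pruning can be measured directly from the kernels; with ConstantSparsity(0.80, begin_step=2000) it only approaches 80% if training actually reached step 2000:

for layer in model.layers:
    for w in layer.get_weights():
        if w.ndim > 1:  # kernel matrices only, skip biases
            print(f'{layer.name}: sparsity = {np.mean(w == 0):.2f}')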
# Create an HLS config from the Keras model
config = hls4ml.utils.config_from_keras_model(model, granularity='name')

# Set the ReuseFactor for supported layers
for layer in config['LayerName']:
    if 'ReuseFactor' in config['LayerName'][layer]:  # Only update if ReuseFactor exists
        config['LayerName'][layer]['ReuseFactor'] = 64

# Log the configuration for verification
pprint.pprint(config)

# Convert the Keras model to an HLS model
hls_model = hls4ml.converters.convert_from_keras_model(
    model, hls_config=config, io_type='io_stream',
    output_dir='finapynq_hls4ml_prj3', part='xc7z020clg400-1',
    backend='VivadoAccelerator', board='pynq-z2'
)

# Compile the HLS model and build the bitfile
hls_model.compile()
hls_model.build(csim=False, export=True, bitfile=True)
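For completeness, this is how I plan to run inference on the PYNQ-Z2 once the bitfile works. It is only a sketch based on the axi_stream_driver.py that the VivadoAccelerator backend generates; the NeuralNetworkOverlay class name, bitfile name, and predict signature should be checked against the files in the output directory:

import numpy as np
from axi_stream_driver import NeuralNetworkOverlay  # generated by the VivadoAccelerator backend

X_test = np.load('package/X_test.npy')
y_test = np.load('package/y_test.npy')

# Bitfile name is illustrative; use the one produced in the output directory
nn = NeuralNetworkOverlay('hls4ml_nn.bit', X_test.shape, y_test.shape)
y_pred = nn.predict(X_test)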
Thank you!