Labels: feature request (New feature or request)

Description
If `use_bias` is set to `True` with `ComplexConv2D` (the default), training takes 2-3 times longer than when no bias is used. With Keras' real-valued `Conv2D`, the bias makes essentially no difference to training time.
Is that to be expected?
I observed that CPU usage is somewhat higher when the bias is enabled, but allocated GPU memory is the same in both cases.
See below for a simple example.
```python
import numpy as np
import tensorflow as tf

import cvnn.layers

n_samples = 10000
data_shape = (n_samples, 128, 256, 2)
input_shape = data_shape[1:]
data = np.csingle(np.random.rand(*data_shape) + 1j * np.random.rand(*data_shape))
labels = np.float32(np.random.rand(n_samples))

use_bias = True  # True increases train time by a factor of 2-3

model = tf.keras.models.Sequential([
    cvnn.layers.ComplexInput(input_shape=input_shape, dtype=np.complex64),
    cvnn.layers.ComplexConv2D(8, (5, 5), activation='cart_relu', use_bias=use_bias),
    cvnn.layers.ComplexFlatten(),
    cvnn.layers.ComplexDense(1, activation='convert_to_real_with_abs'),
])

print("Total size: {} MB".format((data.nbytes + labels.nbytes) / 1_000_000))
model.compile(
    optimizer=tf.optimizers.Adam(learning_rate=1e-04),
    loss='mean_squared_error',
    metrics=[tf.keras.metrics.RootMeanSquaredError()],
)
model.summary()
model.fit(data, labels, epochs=5, verbose=2)
```
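For comparison, a minimal real-valued baseline (not part of the original report, and with much smaller shapes so it runs quickly) shows how one can time plain Keras `Conv2D` with and without bias; in the real-valued case the two timings should be roughly equal, which is the behavior the complex-valued layer is being compared against:

```python
# Hypothetical real-valued baseline: time Keras Conv2D training with and
# without a bias term. Shapes are deliberately tiny compared to the repro
# script above so the comparison finishes in seconds.
import time

import numpy as np
import tensorflow as tf

n_samples = 64
data = np.float32(np.random.rand(n_samples, 32, 32, 2))
labels = np.float32(np.random.rand(n_samples))


def train_time(use_bias):
    """Build a small real-valued analogue of the model above and time fit()."""
    model = tf.keras.models.Sequential([
        tf.keras.layers.Input(shape=(32, 32, 2)),
        tf.keras.layers.Conv2D(8, (5, 5), activation='relu', use_bias=use_bias),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer='adam', loss='mean_squared_error')
    start = time.perf_counter()
    model.fit(data, labels, epochs=2, verbose=0)
    return time.perf_counter() - start


t_bias = train_time(True)
t_no_bias = train_time(False)
print(f"with bias: {t_bias:.2f}s, without bias: {t_no_bias:.2f}s")
```

The same wrapper could be pointed at `cvnn.layers.ComplexConv2D` to quantify the 2-3x slowdown described above.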