
Commit d2cdb41
Merge pull request #1 from ctarver/conjugate_branch
Conjugate branch
ctarver authored Apr 22, 2019
2 parents 8034f87 + bd1cc5a commit d2cdb41
Showing 4 changed files with 69 additions and 10 deletions.
35 changes: 35 additions & 0 deletions ILA_DPD.m
@@ -36,6 +36,8 @@
         nIterations % Number of iterations used in the ILA learning
         block_size % Block size used for each iteration in the learning
         coeffs % DPD coefficients
+        use_conj % Use a conjugate branch as well
+        use_dc_term % use a dc term
     end

     methods
@@ -47,6 +49,8 @@
                 params.memory_depth = 3;
                 params.nIterations = 3;
                 params.block_size = 50000;
+                params.use_conj = 0;
+                params.use_dc_term = 0;
             end

             if mod(params.order, 2) == 0
@@ -58,6 +62,9 @@
             obj.nIterations = params.nIterations;
             obj.block_size = params.block_size;

+            obj.use_conj = params.use_conj;
+            obj.use_dc_term = params.use_dc_term;
+
             % Start DPD coeffs being completely linear (no effect)
             obj.coeffs = zeros(obj.convert_order_to_number_of_coeffs, obj.memory_depth);
             obj.coeffs(1) = 1;
@@ -137,6 +144,7 @@ function perform_learning(obj, x, pa)
             number_of_basis_vectors = obj.memory_depth * obj.convert_order_to_number_of_coeffs;
             X = zeros(length(x), number_of_basis_vectors);

+            % Main branch
             count = 1;
             for i = 1:2:obj.order
                 branch = x .* abs(x).^(i-1);
@@ -147,6 +155,24 @@
                     count = count + 1;
                 end
             end
+
+            if obj.use_conj
+                % Conjugate branch
+                for i = 1:2:obj.order
+                    branch = conj(x) .* abs(x).^(i-1);
+                    for j = 1:obj.memory_depth
+                        delayed_version = zeros(size(branch));
+                        delayed_version(j:end) = branch(1:end - j + 1);
+                        X(:, count) = delayed_version;
+                        count = count + 1;
+                    end
+                end
+            end
+
+            % DC
+            if obj.use_dc_term
+                X(:, count) = 1;
+            end
         end

@@ -158,7 +184,16 @@ function perform_learning(obj, x, pa)
             if nargin == 1
                 order = obj.order;
             end
+
             number_of_coeffs = (order + 1) / 2;
+
+            if obj.use_conj
+                number_of_coeffs = 2 * number_of_coeffs;
+            end
+
+            if obj.use_dc_term
+                number_of_coeffs = number_of_coeffs + 1;
+            end
         end
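For readers skimming the diff, the basis-matrix construction this commit extends can be sketched in a few lines of NumPy. This is an illustrative Python translation, not code from this repo: `build_basis` is a hypothetical helper that mirrors the MATLAB logic above (odd-order branches, delays 0 through `memory_depth - 1`, optional conjugate branch, optional DC column).

```python
import numpy as np

def build_basis(x, order=7, memory_depth=3, use_conj=False, use_dc_term=False):
    """Memory-polynomial basis: columns x[n-j] * |x[n-j]|^(i-1) for odd i,
    optionally a conjugate branch conj(x) * |x|^(i-1), and a DC column."""
    x = np.asarray(x, dtype=complex)
    # Odd-order main branches: x, x|x|^2, x|x|^4, ...
    branches = [x * np.abs(x) ** (i - 1) for i in range(1, order + 1, 2)]
    if use_conj:
        # Conjugate branches: conj(x), conj(x)|x|^2, ...
        branches += [np.conj(x) * np.abs(x) ** (i - 1)
                     for i in range(1, order + 1, 2)]
    cols = []
    for branch in branches:
        for j in range(memory_depth):          # delays 0 .. memory_depth-1
            delayed = np.zeros_like(branch)
            delayed[j:] = branch[:len(branch) - j]
            cols.append(delayed)
    if use_dc_term:
        cols.append(np.ones_like(x))           # single DC column
    return np.column_stack(cols)

x = np.array([1 + 1j, 0.5 - 0.2j, -0.3 + 0.8j, 0.9j])
# (5+1)/2 = 3 odd-order branches, doubled by the conjugate branch,
# times 2 delays each, plus one DC column -> 13 columns
X = build_basis(x, order=5, memory_depth=2, use_conj=True, use_dc_term=True)
```

Note that the conjugate branch doubles the branch count while the DC term contributes a single column, matching the coefficient arithmetic in `convert_order_to_number_of_coeffs` above.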


9 changes: 9 additions & 0 deletions README.md
@@ -44,3 +44,12 @@ The DPD is a nonlinear function that approximates an inverse of the PA's nonline
One strong advantage of this model is that the FIR filter, and hence every coefficient of the model, comes after its corresponding nonlinearity. This means the output of the model can be expressed as a linear system, *y = X b*. Here, X is a matrix where each column is a different nonlinear branch of the form *x|x|^{i-1}*, where *i* is the order (odd only) and *x* is the input signal. Given input/output samples, this becomes a linear least-squares regression problem, since the model is linear in its coefficients: we want to *minimize_b || y - X b ||* for some experimental *x* and *y*. This minimizes the sum of squared residuals and gives the best fit.
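The least-squares step described above is a one-liner in most numeric environments. Here is a small self-contained NumPy sketch; the signal, branch choice, and coefficient values are invented for illustration and are not from this repo:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(200) + 1j * rng.standard_normal(200)

# Two odd-order branches of a memoryless model: x and x|x|^2
X = np.column_stack([x, x * np.abs(x) ** 2])

# Synthetic "PA" output with known coefficients, so the fit is checkable
b_true = np.array([1.0 + 0.1j, -0.05 + 0.02j])
y = X @ b_true

# Least-squares estimate: minimize_b || y - X b ||
b_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Because the model is linear in *b*, `lstsq` recovers the coefficients exactly here; with real measured data the fit instead returns the best approximation in the squared-error sense.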

The DPD can't easily be modeled this way directly because we don't know the desired predistorter output; we only know the desired PA output, the actual PA output, and the original input signal. The indirect learning architecture lets us circumvent this. When fully converged, the PA output is linear, so the inputs to the predistorter and postdistorter are equivalent, their outputs are the same, and the error signal is zero. During training we use the postdistorter: we want its output to equal the predistorter's output so that there is no error. We transmit through the PA to get some output signal. Then, for the postdistorter, we have input samples (the PA output) and a desired postdistorter output (the predistorter output). We can start with some DPD coefficients (such as a purely linear DPD), perform an LS fit to find the best coefficients for the postdistorter, copy them to the predistorter, and repeat for a few iterations.

## References:
For the conjugate branch:
```
L. Anttila, P. Handel and M. Valkama, "Joint Mitigation of Power Amplifier and I/Q Modulator Impairments in Broadband Direct-Conversion Transmitters," in IEEE Transactions on Microwave Theory and Techniques, vol. 58, no. 4, pp. 730-739, April 2010.
doi: 10.1109/TMTT.2010.2041579
keywords: {modulators;power amplifiers;radio transmitters;telecommunication channels;broadband direct-conversion transmitters;frequency-dependent power amplifier;I/Q modulator impairments;direct-conversion radio transmitters;extended parallel Hammerstein structure;parameter estimation stage;indirect learning architecture;adjacent channel power ratio;Broadband amplifiers;Power amplifiers;Radio transmitters;Digital modulation;Predistortion;Local oscillators;Wideband;Nonlinear distortion;Radiofrequency amplifiers;Frequency estimation;Digital predistortion (PD);direct-conversion radio;in-phase and quadrature (I/Q) imbalance;I/Q modulator;local oscillator (LO) leakage;mirror-frequency interference (MFI);power amplifier (PA);spectral regrowth},
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5431085&isnumber=5446455
```
28 changes: 20 additions & 8 deletions example.m
@@ -6,10 +6,10 @@
 addpath(genpath('WARPLab-Matlab-Wrapper'))
 addpath(genpath('Power-Amplifier-Model'))

-rms_input = 0.20;
+rms_input = 0.50;

 % Setup the PA simulator or TX board
-PA_board = 'WARP'; % either 'WARP', 'webRF', or 'none'
+PA_board = 'webRF'; % either 'WARP', 'webRF', or 'none'
 switch PA_board
     case 'WARP'
         warp_params.nBoards = 1; % Number of boards
@@ -20,12 +20,13 @@
         board = PowerAmplifier(7, 4);
         Fs = 40e6; % WARP board sampling rate.
     case 'webRF'
-        board = webRF();
+        dbm_power = -26;
+        board = webRF(dbm_power);
         Fs = 200e6; % webRF sampling rate.
 end

 % Setup OFDM
-ofdm_params.nSubcarriers = 300;
+ofdm_params.nSubcarriers = 600;
 ofdm_params.subcarrier_spacing = 15e3; % 15kHz subcarrier spacing
 ofdm_params.constellation = 'QPSK';
 ofdm_params.cp_length = 144; % Number of samples in cyclic prefix.
@@ -38,21 +39,32 @@
 tx_data = normalize_for_pa(upsampled_tx_data, rms_input);

 % Setup DPD
-dpd_params.order = 7;
-dpd_params.memory_depth = 3;
-dpd_params.nIterations = 3;
+dpd_params.order = 11;
+dpd_params.memory_depth = 4;
+dpd_params.nIterations = 2;
 dpd_params.block_size = 50000;

+dpd_params.use_conj = 1;
+dpd_params.use_dc_term = 1;
+conj_dpd = ILA_DPD(dpd_params);
+
+dpd_params.use_conj = 0;
+dpd_params.use_dc_term = 0;
 dpd = ILA_DPD(dpd_params);

 %% Run Expierement
 w_out_dpd = board.transmit(tx_data);
+conj_dpd.perform_learning(tx_data, board);
+w_conj_dpd = board.transmit(conj_dpd.predistort(tx_data));
+
 dpd.perform_learning(tx_data, board);
 w_dpd = board.transmit(dpd.predistort(tx_data));

 %% Plot
 plot_results('psd', 'Original TX signal', tx_data, 40e6)
 plot_results('psd', 'No DPD', w_out_dpd, 40e6)
-plot_results('psd', 'With DPD', w_dpd, 40e6)
+plot_results('psd', 'With Normal DPD', w_dpd, 40e6)
+plot_results('psd', 'With Conjug DPD', w_conj_dpd, 40e6)

 %% Some helper functions
 function out = up_sample(in, Fs, sampling_rate)
7 changes: 5 additions & 2 deletions webRF.m
@@ -11,9 +11,12 @@
     end

     methods
-        function obj = webRF()
+        function obj = webRF(dbm_power)
             %webRF Construct an instance of this class
-            obj.RMSin = -21;
+            if nargin == 0
+                dbm_power = -24;
+            end
+            obj.RMSin = dbm_power;
             obj.synchronization.sub_sample = 1;
         end

