Gradient Descent for DoA

The fifth generation (5G) of wireless networks is equipped with an increased number of antennas and radio frequency (RF) chains at the transmitter and receiver. This allows 5G systems to estimate angles with improved resolution. In Release-15, 3GPP defined standards that enable methods for estimating the UE location based on angle-of-departure and angle-of-arrival measurements in 5G-NR systems. The nomenclature of these methods is detailed below.

Table-1: Angle of Departure and Arrival based Positioning in 5G Networks

| Generation | Method | Measurement | Optimization methods |
|------------|--------|-------------|----------------------|
| 5G | DL-AoD | Beam-ID and RSRP | Newton-Raphson, Gradient-Descent |
| 5G | UL-AoA | AoA, RToA, Beam-ID, and RSRP | Newton-Raphson, Gradient-Descent |

It is not possible to find a closed-form position estimator from 2D-DoAs. Hence, DoA-based positioning methods rely on iterative optimization. One such method is Gradient Descent (GD), which has a lower per-iteration complexity than the Newton-Raphson (NR) method but needs more iterations to converge to a local/global optimum. The two methods are compared below:

Table-2: Performance comparison between the Gradient Descent and Newton-Raphson optimization algorithms.

| Method | Per-iteration complexity | Convergence | Utility |
|--------|--------------------------|-------------|---------|
| Gradient Descent | \(\text{N}\) | Slow | Low power |
| Newton-Raphson | \(\text{N}^2\) | Fast | Low latency |

where N denotes the number of measurements used for positioning. The implementations of both methods provided in 5G Toolkit are inspired by [3gppDoA].
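To make the optimization concrete, the sketch below is a minimal, self-contained NumPy illustration of gradient descent on DoA measurements; it is not the toolkit's implementation. It assumes the convention \(\phi = \text{atan2}(\Delta y, \Delta x)\) with \(\theta\) measured from the horizontal plane, and uses a finite-difference gradient of the squared angle residuals; the toolkit may use different angle conventions and an analytical gradient.

```python
import numpy as np

def angles_from(refPos, x):
    """Azimuth/elevation of point x as seen from each reference position."""
    d  = x - refPos                                              # (Nref, 3)
    az = np.arctan2(d[:, 1], d[:, 0])                            # phi
    el = np.arctan2(d[:, 2], np.linalg.norm(d[:, :2], axis=1))   # theta
    return np.stack([az, el], axis=1)                            # (Nref, 2)

def doa_cost(x, refPos, xoA):
    """Sum of squared residuals between predicted and measured angles."""
    return np.sum((angles_from(refPos, x) - xoA) ** 2)

def gd_doa(refPos, xoA, stepsize=50.0, numIter=5000, tolerance=1e-10):
    x, eps = refPos.mean(axis=0), 1e-6       # start at the centroid of the references
    for _ in range(numIter):
        # central finite-difference gradient of the cost
        g = np.array([(doa_cost(x + eps * e, refPos, xoA)
                       - doa_cost(x - eps * e, refPos, xoA)) / (2 * eps)
                      for e in np.eye(3)])
        x = x - stepsize * g
        if doa_cost(x, refPos, xoA) < tolerance:
            break
    return x
```

With four reference nodes at known positions and noiseless angles, this sketch recovers the target position closely; with noisy measurements the residual cost stays above zero and the tolerance test rarely triggers, so the iteration budget dominates.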


Code example

from toolkit5G.Positioning import GradientDescentDoA

# shape of refLocations: Nref x 3
# shape of xoA:          Nref x 2
numEpochs     = 1
tolerance     = 1e-5
numIterations = 10000
stepsize      = 1

posEstimator     = GradientDescentDoA(numEpochs = numEpochs, numIterationPerEpoch = numIterations,
                                      stepsize = stepsize, tolerance = tolerance, isd = 100)
positionEstimate = posEstimator(xoA, refLocations)

The input-output interface of the Gradient Descent algorithm is detailed below.

class toolkit5G.Positioning.GradientDescentDoA(numIterationPerEpoch=10000, numEpochs=10, tolerance=1e-06, stepsize=0.1, isd=100)[source]

This module uses direction-of-arrival measurements \(\{(\phi_i, \theta_i)\}_{i=0}^{i=\text{N}_\text{ref}-1}\) to estimate the position of a node based on the gradient descent optimization method.

Parameters:
  • numIterationPerEpoch (int) – Number of iterations per epoch for the gradient descent optimization. Default value is 10000.

  • numEpochs (int) – Number of epochs used in the gradient descent optimization. Default value is 10.

  • tolerance (float) – Error tolerance. The optimization stops when the optimization error falls below the tolerance value. Default value is 1e-06.

  • stepsize (float) – Optimization step size. The gradient descent algorithm iterates with this step size to converge towards the global solution. Default value is 0.1.

  • isd (float) – Defines the inter-site distance (in meters). It is equal to \(2\times\) the cell size. Default value is 100.

Input:
  • refPosition ((\(\text{N}_\text{ref}\), 3), np.number) – Reference locations with respect to which the DoAs are estimated.

  • xoA ((\(\text{N}_\text{ref}\), 2), np.number) – Direction-of-arrival estimates of the target node with respect to the \(\text{N}_\text{ref}\) reference nodes. xoA[:,0] stores the azimuth angles (\(\phi\)) and xoA[:,1] stores the elevation angles (\(\theta\)).

Output:

(3,), np.number – Position estimate.

Note

A large value of numIterationPerEpoch improves the optimization performance but increases the complexity of the method.

Note

A large value of numEpochs helps reduce the odds of getting stuck in a local optimum but increases the complexity of the method.
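Multiple epochs act as a restart strategy: run the descent several times from different starting points and keep the best result. The toy 1-D sketch below illustrates why restarts help on a hypothetical multimodal cost; it is not the DoA cost or the toolkit's restart logic.

```python
import numpy as np

def f(x):
    # toy multimodal cost: several local minima, global minimum near x = 1.57
    return np.sin(3 * x) + 0.1 * (x - 2) ** 2

def gd(x, stepsize=0.01, iters=2000, eps=1e-6):
    """Plain gradient descent with a central finite-difference gradient."""
    for _ in range(iters):
        g = (f(x + eps) - f(x - eps)) / (2 * eps)
        x -= stepsize * g
    return x

# multi-epoch restarts: each "epoch" starts from a fresh random point
rng        = np.random.default_rng(0)
starts     = rng.uniform(-4, 4, size=20)
candidates = [gd(x0) for x0 in starts]
best       = min(candidates, key=f)   # keep the best epoch's result
```

A single run from an unlucky start settles in a shallow local minimum; taking the best of 20 restarts almost surely lands in a deep basin.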

Note

A lower value of tolerance improves the optimization performance but increases the complexity of the method: the method iterates longer to reduce the optimization error below the tolerance limit.

Note

The selection of the step size plays a crucial role in the performance of the method. A small step size results in good performance but requires a large number of iterations (numIterationPerEpoch) to converge. A large step size improves the convergence rate but is susceptible to local minima.
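This trade-off is easy to see on a toy 1-D quadratic (a sketch, unrelated to the toolkit's DoA cost): gradient descent on \(f(x) = x^2\) contracts the iterate by \((1 - 2\mu)\) per step, so a small step size \(\mu\) converges slowly, a well-chosen one converges fast, and a step size above \(2/L\) (here \(L = 2\)) makes the iterates blow up.

```python
def gd_quadratic(stepsize, iters=50):
    """Minimize f(x) = x^2 (gradient 2x) from x = 10 with a fixed step size."""
    x = 10.0
    for _ in range(iters):
        x -= stepsize * 2 * x    # x_{k+1} = (1 - 2*stepsize) * x_k
    return x

slow     = gd_quadratic(0.01)   # small step: still far from 0 after 50 iterations
fast     = gd_quadratic(0.4)    # well-chosen step: converges to ~0 quickly
unstable = gd_quadratic(1.1)    # step above 2/L = 1: the iterates diverge
```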

Raises:
  • ValueError – [Error-GradientDescentDoA]: ‘refPosition’ and ‘measurements’ must be a numpy array!

  • ValueError – [Error-GradientDescentDoA]: ‘refPosition’ must be a 2D numpy array!

  • ValueError – [Error-GradientDescentDoA]: ‘refPosition’ must be a numpy array of size (nRefnode x 3). refPosition.shape[1] is not equal to 3!

  • ValueError – [Error-GradientDescentDoA]: ‘refPosition’ must be a numpy array of size (nRefnode x 3). refPosition.shape[0] should be more than 3 for triangulation!

  • ValueError – [Error-GradientDescentDoA]: ‘number of refPositions’ must be consistent with the number of measurements!

  • ValueError – [Error-GradientDescentDoA]: ‘numIterationPerEpoch’ should be a scalar integer!

  • ValueError – [Error-GradientDescentDoA]: ‘numEpochs’ should be a scalar integer!

  • ValueError – [Error-GradientDescentDoA]: ‘tolerance’ should be a scalar number!

  • ValueError – [Error-GradientDescentDoA]: ‘stepsize’ should be a scalar number!

  • ValueError – [Error-GradientDescentDoA]: ‘isd’ should be a scalar number!


Newton Raphson for DoA

The input-output interface for its usage is detailed below.

Note

The documentation and API for NewtonRaphsonDoA are not active yet. It will be provided in patch 23a.0.11 by July 28, 2023.

class toolkit5G.Positioning.NewtonRaphsonDoA(numEpochs=1, numIterationPerEpoch=100, stepsize=0.1, tolerance=0.01, isd=100)[source]

The documentation will be updated by 28 July.

References:
[3gppDoA]
  Wang, Z. Shi, Y. Yu, S. Huang and L. Chen, “Enabling Angle-based Positioning to 3GPP NR Systems,” 2019 16th Workshop on Positioning, Navigation and Communications (WPNC), Bremen, Germany, 2019, pp. 1-7, doi: 10.1109/WPNC47567.2019.8970182.