MUSIC (Multiple Signal Classification) is a high-resolution spectral estimation technique based on subspace methods. It is widely used in signal processing and is particularly effective for direction finding, such as direction-of-arrival (DOA) estimation in array signal processing. Introduced by Schmidt (the widely cited paper was published in 1986), it estimates the directions of narrowband signals arriving from multiple directions. MUSIC provides much higher resolution than conventional beamforming and is especially suited to the case where the number of sources is smaller than the number of receiving sensors.
The core idea of MUSIC is to use the covariance matrix of the received signal to separate the signal subspace from the noise subspace, and then exploit the orthogonality between them to estimate the signal directions.
Basic principle and steps of the MUSIC algorithm
- Data model
MUSIC considers narrowband signals from multiple far-field point sources impinging on a linear array. The signals are assumed to be mutually independent, each arriving from a distinct direction of arrival (DOA). After array processing, the received data can be arranged into a snapshot matrix. (Assume P narrowband far-field signals arrive from different directions at an array of N sensors, with P < N; the signals superimpose at the receiving array, and each may have a different frequency and phase.)
- Covariance matrix estimation
First, estimate the covariance matrix of the received signal, typically by time-averaging the snapshot matrix. The covariance matrix captures the correlation between the received sensor outputs.
- Eigendecomposition
Perform an eigenvalue decomposition (or singular value decomposition) of the covariance matrix. This yields a set of eigenvectors and eigenvalues: the eigenvectors span the signal- and noise-subspace directions, while the eigenvalues indicate the energy along each direction.
- Signal and noise subspaces
Based on the magnitudes of the eigenvalues, the eigenvectors are split into a signal subspace and a noise subspace. The signal subspace is spanned by the eigenvectors associated with the largest eigenvalues, and the remaining eigenvectors span the noise subspace. In theory, the dimension of the signal subspace equals the number of sources.
- Direction estimation
A spatial spectrum is constructed from the noise subspace. Because the steering vector at a true signal direction is orthogonal to the noise subspace, the spatial spectrum peaks at the signals' DOAs, so the DOAs are estimated by searching for the peaks of this spectrum.
As with the classical beamformer, you sweep over the candidate values of θ and look for the largest peaks of the spectrum; those peaks correspond to the angles of arrival (the parameter θ) you are trying to measure.
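To make these steps concrete (standard notation, consistent with the code later in this post), let x(t) be the N×1 snapshot vector, T the number of snapshots, a(θ) the array steering vector, and E_s, E_n the matrices whose columns are the signal- and noise-subspace eigenvectors. The sample covariance, its eigendecomposition, and the MUSIC spatial spectrum are then:

\hat{R} = \frac{1}{T}\sum_{t=1}^{T} \mathbf{x}(t)\,\mathbf{x}^{H}(t), \qquad
\hat{R} = \mathbf{E}_s \boldsymbol{\Lambda}_s \mathbf{E}_s^{H} + \mathbf{E}_n \boldsymbol{\Lambda}_n \mathbf{E}_n^{H}, \qquad
P_{\mathrm{MUSIC}}(\theta) = \frac{1}{\mathbf{a}^{H}(\theta)\,\mathbf{E}_n \mathbf{E}_n^{H}\,\mathbf{a}(\theta)}

The DOA estimates are the values of θ at which P_MUSIC(θ) peaks. In the code below, the variable V plays the role of E_n and metric is P_MUSIC(θ) in dB.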
Reference: https://mp.weixin.qq.com/s/9XforfJUpzMQV1LQKIz7iQ
Characteristics of the MUSIC algorithm
- High resolution: MUSIC achieves resolution beyond that of conventional beamforming and can separate very closely spaced sources.
- Fewer sources than array elements: to separate the signal and noise subspaces correctly, the number of sources must be smaller than the number of array elements.
- Sensitivity to model parameters: performance depends on an accurate estimate of the number of sources and on proper array calibration.
MUSIC is a powerful direction-finding technique used in radar, sonar, wireless communications, and many other signal processing applications. Although it offers high resolution, it also places relatively strict requirements on the signal and array models in practice.
Code
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal
from matplotlib.animation import FuncAnimation
sample_rate = 1e6
N = 10000 # number of samples to simulate
# Create a tone to act as the transmitted signal
t = np.arange(N)/sample_rate
f_tone = 0.02e6
tx = np.exp(2j*np.pi*f_tone*t)
# Simulate three omnidirectional antennas in a line with 1/2 wavelength between adjacent ones, receiving a signal that arrives at an angle
d = 0.5
Nr = 3
theta_degrees = 20 # direction of arrival
theta = theta_degrees / 180 * np.pi # convert to radians
a = np.exp(-2j * np.pi * d * np.arange(Nr) * np.sin(theta))
print(a)
# we have to do a matrix multiplication of a and tx, so first reshape both into 2D column vectors; 1D numpy arrays don't support this kind of matrix math
a = a.reshape(-1,1)
#print(a.shape) # 3x1
tx = tx.reshape(-1,1)
#print(tx.shape) # 10000x1
# so how do we use this? simple:
r = a.T @ tx # matrix multiply. don't get too caught up on the transpose of a; the important thing is we're multiplying the array factor by the tx signal
print(r.shape) # r is now a 2D array: one dimension is spatial (elements) and the other is time (samples)
# Plot the real part of the first 200 samples of all three elements
fig, (ax1) = plt.subplots(1, 1, figsize=(7, 3))
ax1.plot(np.asarray(r[0,:]).squeeze().real[0:200]) # the asarray and squeeze are just annoyances we have to do because we came from a matrix
ax1.plot(np.asarray(r[1,:]).squeeze().real[0:200])
ax1.plot(np.asarray(r[2,:]).squeeze().real[0:200])
ax1.set_ylabel("Samples")
ax1.set_xlabel("Time")
ax1.grid()
ax1.legend(['0','1','2'], loc=1)
plt.show()
# note the phase shifts, they are also there on the imaginary portions of the samples
# So far this has been simulating the receiving of a signal from a certain angle of arrival
# in your typical DOA problem you are given samples and have to estimate the angle(s) of arrival
# there are also problems where you have multiple received signals from different directions and one is the SOI while another might be a jammer or interferer you have to null out
# One thing we didn't bother doing yet- let's add noise to this received signal.
# AWGN with a phase shift applied is still AWGN, so we can add it after or before the array factor is applied, it doesn't really matter; we'll do it after
# we need to make sure each element gets an independent noise signal added
n = np.random.randn(Nr, N) + 1j*np.random.randn(Nr, N)
r = r + 0.1*n
fig, (ax1) = plt.subplots(1, 1, figsize=(7, 3))
ax1.plot(np.asarray(r[0,:]).squeeze().real[0:200]) # the asarray and squeeze are just annoyances we have to do because we came from a matrix
ax1.plot(np.asarray(r[1,:]).squeeze().real[0:200])
ax1.plot(np.asarray(r[2,:]).squeeze().real[0:200])
ax1.set_ylabel("Samples")
ax1.set_xlabel("Time")
ax1.grid()
ax1.legend(['0','1','2'], loc=1)
plt.show()
# MUSIC with complex scenario
if False:
    # more complex scenario
    Nr = 8 # 8 elements
    theta1 = 20 / 180 * np.pi # convert to radians
    theta2 = 25 / 180 * np.pi
    theta3 = -40 / 180 * np.pi
    a1 = np.exp(-2j * np.pi * d * np.arange(Nr) * np.sin(theta1)).reshape(-1,1) # 8x1
    a2 = np.exp(-2j * np.pi * d * np.arange(Nr) * np.sin(theta2)).reshape(-1,1)
    a3 = np.exp(-2j * np.pi * d * np.arange(Nr) * np.sin(theta3)).reshape(-1,1)
    # we'll use 3 different frequencies. 1xN
    tone1 = np.exp(2j*np.pi*0.01e6*t).reshape(1,-1)
    tone2 = np.exp(2j*np.pi*0.02e6*t).reshape(1,-1)
    tone3 = np.exp(2j*np.pi*0.03e6*t).reshape(1,-1)
    r = a1 @ tone1 + a2 @ tone2 + 0.1 * a3 @ tone3
    n = np.random.randn(Nr, N) + 1j*np.random.randn(Nr, N)
    r = r + 0.05*n # 8xN

    # MUSIC Algorithm (part that doesn't change with theta_i)
    num_expected_signals = 3 # Try changing this!
    R = r @ r.conj().T # Calc covariance matrix, it's Nr x Nr
    w, v = np.linalg.eig(R) # eigenvalue decomposition, v[:,i] is the eigenvector corresponding to the eigenvalue w[i]
    if False:
        fig, (ax1) = plt.subplots(1, 1, figsize=(7, 3))
        ax1.plot(10*np.log10(np.abs(w)),'.-')
        ax1.set_xlabel('Index')
        ax1.set_ylabel('Eigenvalue [dB]')
        plt.show()
        #fig.savefig('../_images/doa_eigenvalues.svg', bbox_inches='tight') # I EDITED THIS ONE IN INKSCAPE!!!
        exit()

    eig_val_order = np.argsort(np.abs(w)) # find order of magnitude of eigenvalues
    v = v[:, eig_val_order] # sort eigenvectors using this order
    V = np.zeros((Nr, Nr - num_expected_signals), dtype=np.complex64) # Noise subspace is spanned by the eigenvectors of the remaining (smallest) eigenvalues
    for i in range(Nr - num_expected_signals):
        V[:, i] = v[:, i]

    theta_scan = np.linspace(-1*np.pi, np.pi, 1000) # 1000 different thetas between -180 and +180 degrees
    results = []
    for theta_i in theta_scan:
        a = np.exp(-2j * np.pi * d * np.arange(Nr) * np.sin(theta_i))
        a = a.reshape(-1,1)
        metric = 1 / (a.conj().T @ V @ V.conj().T @ a) # The main MUSIC equation
        metric = np.abs(metric.squeeze()) # take magnitude
        metric = 10*np.log10(metric) # convert to dB
        results.append(metric)

    results -= np.max(results) # normalize
    fig, ax = plt.subplots(subplot_kw={'projection': 'polar'})
    ax.plot(theta_scan, results) # MAKE SURE TO USE RADIAN FOR POLAR
    ax.set_theta_zero_location('N') # make 0 degrees point up
    ax.set_theta_direction(-1) # increase clockwise
    ax.set_rlabel_position(30) # Move grid labels away from other labels
    plt.show()
    fig.savefig('../_images/doa_music.svg', bbox_inches='tight')
    exit()
# MUSIC animation, sweeping the angle of the second of two signals
if False:
    Nr = 8 # 8 elements
    num_expected_signals = 2
    theta2s = np.arange(15,21,0.05) / 180 * np.pi
    theta_scan = np.linspace(-1*np.pi, np.pi, 2000)
    results = np.zeros((len(theta2s), len(theta_scan)))
    for theta2s_i in range(len(theta2s)):
        theta1 = 18 / 180 * np.pi # convert to radians
        theta2 = theta2s[theta2s_i]
        a1 = np.exp(-2j * np.pi * d * np.arange(Nr) * np.sin(theta1))
        a1 = a1.reshape(-1,1)
        a2 = np.exp(-2j * np.pi * d * np.arange(Nr) * np.sin(theta2))
        a2 = a2.reshape(-1,1)
        tone1 = np.exp(2j*np.pi*0.01e6*t)
        tone1 = tone1.reshape(-1,1)
        tone2 = np.exp(2j*np.pi*0.02e6*t)
        tone2 = tone2.reshape(-1,1)
        r = a1 @ tone1.T + a2 @ tone2.T
        n = np.random.randn(Nr, N) + 1j*np.random.randn(Nr, N)
        r = r + 0.01*n
        R = r @ r.conj().T # Calc covariance matrix, it's Nr x Nr
        w, v = np.linalg.eig(R) # eigenvalue decomposition, v[:,i] is the eigenvector corresponding to the eigenvalue w[i]
        eig_val_order = np.argsort(np.abs(w)) # find order of magnitude of eigenvalues
        v = v[:, eig_val_order] # sort eigenvectors using this order
        V = np.zeros((Nr, Nr - num_expected_signals), dtype=np.complex64) # Noise subspace is spanned by the eigenvectors of the remaining (smallest) eigenvalues
        for i in range(Nr - num_expected_signals):
            V[:, i] = v[:, i]
        for theta_i in range(len(theta_scan)):
            a = np.exp(-2j * np.pi * d * np.arange(Nr) * np.sin(theta_scan[theta_i]))
            a = a.reshape(-1,1)
            metric = 1 / (a.conj().T @ V @ V.conj().T @ a) # The main MUSIC equation
            metric = np.abs(metric.squeeze()) # take magnitude
            metric = 10*np.log10(metric) # convert to dB
            results[theta2s_i, theta_i] = metric
        results[theta2s_i,:] /= np.max(results[theta2s_i,:]) # normalize

    fig, ax = plt.subplots(subplot_kw={'projection': 'polar'})
    fig.set_tight_layout(True)
    line, = ax.plot(theta_scan, results[0,:])
    ax.set_thetamin(0)
    ax.set_thetamax(30)
    ax.set_theta_zero_location('N') # make 0 degrees point up
    ax.set_theta_direction(-1) # increase clockwise
    ax.set_rlabel_position(22.5) # Move grid labels away from other labels
    def update(i):
        i = int(i)
        print(i)
        results_i = results[i,:] #/ np.max(results[i,:]) * 10 # had to add this in for the last animation because it got too large
        line.set_ydata(results_i)
        return line, ax
    anim = FuncAnimation(fig, update, frames=np.arange(0, len(theta2s)), interval=100)
    anim.save('../_images/doa_music_animation.gif', dpi=80, writer='imagemagick')
    exit()
Reference: https://github.com/777arc/PySDR/blob/master/figure-generating-scripts/doa.py
Smooth-MUSIC
Smooth-MUSIC is a variant of MUSIC intended mainly to improve performance for small arrays, or when the number of sources approaches the number of array elements. It smooths the received data matrix to add virtual array elements, which strengthens the algorithm's resolving ability.
How it works:
- Data smoothing: Smooth-MUSIC applies a sliding average or another smoothing operation to the received data matrix to build an "extended" data matrix. This acts as if the array physically had more receiving elements and helps improve resolution (a minimal sketch of one common form of this step, forward spatial smoothing, follows this list).
- Subspace decomposition: like classical MUSIC, Smooth-MUSIC uses an eigenvalue decomposition (EVD) or singular value decomposition (SVD) to separate the extended data matrix into a signal subspace and a noise subspace.
- Spectral peak search: a spectrum based on the orthogonality of the signal and noise subspaces is constructed and searched over the directions of interest; its peaks give the estimated source directions.
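A minimal sketch of the smoothing step, assuming the common forward-spatial-smoothing formulation on a uniform linear array; the function name spatially_smoothed_covariance and the subarray_size parameter are illustrative, not taken from the referenced material.

import numpy as np

def spatially_smoothed_covariance(r, subarray_size):
    # r: Nr x N snapshot matrix from a uniform linear array
    # Average the sample covariance matrices of all overlapping subarrays
    # of length subarray_size (forward spatial smoothing).
    Nr, N = r.shape
    num_subarrays = Nr - subarray_size + 1
    R = np.zeros((subarray_size, subarray_size), dtype=complex)
    for i in range(num_subarrays):
        sub = r[i:i+subarray_size, :]   # snapshots seen by the i-th shifted subarray
        R += sub @ sub.conj().T / N     # that subarray's sample covariance
    return R / num_subarrays            # averaged (smoothed) covariance

The smoothed covariance (subarray_size x subarray_size) then replaces the full covariance matrix in the eigendecomposition and peak-search steps of the MUSIC code above, with steering vectors of length subarray_size.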
Advantages
- Higher resolution: for small arrays, or when the number of sources is close to the number of array elements, Smooth-MUSIC can provide higher resolution than classical MUSIC.
- More robust performance: the smoothing reduces the variance of the direction estimates, so the algorithm also holds up well at low SNR.
Disadvantages
- Higher complexity: the smoothing adds computational cost, especially for large data sets.
- Design choices: the size and type of the smoothing window must be chosen carefully for the specific scenario to balance performance against complexity.
Cyclic-MUSIC
Cyclic-MUSIC is another variant of MUSIC, designed specifically for signals with periodic (cyclostationary) structure. It exploits this periodicity to improve the accuracy of source localization.
How it works
- Periodic-signal analysis: Cyclic-MUSIC uses the cyclostationary statistics of the signal for source localization. Analyzing the cyclic spectrum of the received signal extracts its periodic features, which is particularly useful for certain signal types such as communication signals.
- Subspace decomposition: like classical MUSIC, Cyclic-MUSIC is a subspace method; the difference is that it operates in the cyclic-spectrum domain, using the signals' cycle frequencies to separate the signal and noise subspaces.
- DOA estimation: by building a peak-search function based on the cyclic spectrum, Cyclic-MUSIC can accurately estimate the directions of sources that exhibit the chosen periodicity.
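As background (standard cyclostationarity notation, not taken from the referenced material), the cyclic autocorrelation underlying the cyclic spectrum mentioned above is commonly defined as

R_x^{\alpha}(\tau) = \lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2} x\!\left(t+\tfrac{\tau}{2}\right)\, x^{*}\!\left(t-\tfrac{\tau}{2}\right)\, e^{-j 2\pi \alpha t}\,dt

It is nonzero only at the signal's cycle frequencies α, so correlation matrices built from these statistics at a chosen α retain only the signals sharing that cycle frequency; this selectivity is what Cyclic-MUSIC exploits.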
Other MUSIC variants
- Root-MUSIC: applicable to uniform linear arrays; it estimates the signal directions directly from the roots of a particular polynomial, which lowers the computational cost and speeds up the estimate compared to a grid search (a minimal sketch follows this list).
- ESPRIT: exploits the rotational invariance between array elements to estimate source directions without a multidimensional search. ESPRIT is usually more computationally efficient than classical MUSIC but requires a specific array geometry.
- Min-Norm: estimates the signal directions by minimizing the norm of a projection built from the noise subspace; it is particularly suited to scenarios with few sources.
- 2D-MUSIC & 3D-MUSIC: extensions of MUSIC to two- and three-dimensional source localization. They estimate both the azimuth and elevation of the sources and suit more complex spatial processing tasks.
- Focused-MUSIC: optimized for applications with demanding resolution requirements; by concentrating the processing on a region of interest, it can estimate the directions of sources in that region more precisely.
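Below is a minimal sketch of the Root-MUSIC idea, consistent with the steering-vector convention used in the code above (a_n = exp(-2jπ·d·n·sin θ)); the function name root_music and its arguments are illustrative.

import numpy as np

def root_music(R, num_signals, d=0.5):
    # R: Nr x Nr covariance matrix from a uniform linear array with element spacing d (in wavelengths)
    Nr = R.shape[0]
    w, v = np.linalg.eigh(R)                     # eigenvalues in ascending order
    En = v[:, :Nr - num_signals]                 # noise subspace (eigenvectors of the smallest eigenvalues)
    C = En @ En.conj().T                         # projection onto the noise subspace
    # a(theta)^H C a(theta) becomes a polynomial in z = exp(-2j*pi*d*sin(theta));
    # its coefficients are the sums along the diagonals of C (highest degree first)
    coeffs = np.array([np.trace(C, offset=k) for k in range(Nr - 1, -Nr, -1)])
    roots = np.roots(coeffs)
    roots = roots[np.abs(roots) < 1]             # roots come in pairs z and 1/conj(z); keep those inside the unit circle
    roots = roots[np.argsort(-np.abs(roots))][:num_signals]  # the ones closest to the unit circle correspond to the sources
    return np.arcsin(-np.angle(roots) / (2 * np.pi * d))     # DOAs in radians

For example, calling root_music(R, 3) with the covariance matrix R from the 8-element scenario above should return angles close to the simulated 20°, 25°, and -40° (in radians), with no grid search over θ.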