Neural Networks in Practice: Implementing a Neural-Network VAD on a DSP (Part 1)
There are many different ways to implement a VAD (voice activity detection) neural network. The network here is implemented in PyTorch; its structure is as follows. Note that between every pair of layers the activations are quantized to Q23 fixed point and then dequantized, to simulate the rounding a fixed-point DSP implementation would introduce:

import torch
import torch.nn as nn

class MiniVAD(nn.Module):
    def __init__(self, n_fft=512):
        super().__init__()
        self.input = 48  # input: B x T x 48
        # fusion layer
        self.fusion = nn.Sequential(
            nn.Linear(self.input, 32),  # B x T x 32
            nn.ReLU())
        # decision layers
        self.rnn = nn.GRU(32, 64, batch_first=True, bidirectional=False)
        self.classifier = nn.Sequential(
            nn.Linear(64, 2),
            # nn.Sigmoid()
        )
        self.state = torch.zeros(1, 1, 64)
        self.state1 = torch.zeros(1, 1, 64, dtype=torch.int32)

    def forward(self, combinedx):  # input B x T x C = B x T x 48
        vad_prob = torch.zeros(1, combinedx.shape[1], 2)
        for n in range(combinedx.shape[1]):
            combined = combinedx[:, n:n+1, :]
            combined = quant_fixed(combined, 23)   # quantize to Q23
            combined = combined / 2**23            # dequantize
            combined = torch.tensor(combined).float()
            fused = self.fusion[0](combined)       # [B, T, 32]
            fused = self.fusion[1](fused)          # [B, T, 32]
            # temporal modeling
            fused = quant_fixed(fused, 23)
            fused = fused / 2**23
            fused = torch.tensor(fused).float()
            rnn_out, self.state = self.rnn(fused, self.state.clone())  # [B, T, 64]
            # classification
            rnn_out = quant_fixed(rnn_out, 23)
            rnn_out = rnn_out / 2**23
            rnn_out = torch.tensor(rnn_out).float()
            tmp = self.classifier(rnn_out)         # [B, T, 2]
            tmp = quant_fixed(tmp, 23)
            tmp = tmp / 2**23
            tmp = torch.tensor(tmp).float()
            vad_prob[:, n:n+1, :] = tmp
        return vad_prob.squeeze(-1)

Network topology printed by summary:

Layer (type:depth-idx)                   Output Shape              Param #
MiniVAD                                  [1, 65, 2]                --
├─Sequential: 1-193                      --                        (recursive)
│    └─Linear: 2-1                       [1, 1, 32]                1,568
│    └─ReLU: 2-2                         [1, 1, 32]                --
├─GRU: 1-2                               [1, 1, 64]                18,816
├─Sequential: 1-3                        [1, 1, 2]                 --
│    └─Linear: 2-3                       [1, 1, 2]                 130
├─Sequential: 1-193                      --                        (recursive)
│    └─Linear: 2-4                       [1, 1, 32]                (recursive)
│    └─ReLU: 2-5                         [1, 1, 32]                --
├─GRU: 1-5                               [1, 1, 64]                (recursive)
├─Sequential: 1-6                        [1, 1, 2]                 (recursive)
│    └─Linear: 2-6                       [1, 1, 2]                 (recursive)
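The forward pass above calls quant_fixed, whose definition is not shown in this part. A minimal sketch of what such a helper could look like, assuming it rounds to a Q23 fixed-point integer and saturates to the signed 32-bit range (both the rounding mode and the saturation width are assumptions, not the article's actual implementation):

```python
import numpy as np
import torch

def quant_fixed(x, frac_bits=23):
    """Hypothetical sketch: quantize floats to Q-format fixed-point integers.

    Returns integer values scaled by 2**frac_bits, saturated to the signed
    32-bit range; dividing the result by 2**frac_bits (as the model does)
    recovers the rounded float value.
    """
    scale = 2 ** frac_bits
    if isinstance(x, torch.Tensor):
        x = x.detach().cpu().numpy()
    q = np.round(np.asarray(x) * scale)            # round to nearest step
    q = np.clip(q, -(2 ** 31), 2 ** 31 - 1)       # saturate to int32 range
    return q.astype(np.int64)                      # a DSP would hold int32
```

With this definition, quant_fixed(x, 23) / 2**23 reproduces x up to a quantization error of at most 2**-24, which is the roundoff the loop in forward injects between layers.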
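As a sanity check, the parameter counts in the summary table can be reproduced from the layer shapes (in PyTorch a GRU has three gates, each with an input weight, a recurrent weight, and two bias vectors):

```python
fusion_params = 48 * 32 + 32                      # Linear(48, 32): weights + bias
gru_params = 3 * (32 * 64 + 64 * 64 + 64 + 64)    # 3 gates: W_ih, W_hh, b_ih, b_hh
classifier_params = 64 * 2 + 2                    # Linear(64, 2): weights + bias
print(fusion_params, gru_params, classifier_params)  # 1568 18816 130
```

These match the 1,568, 18,816, and 130 entries printed by summary; the (recursive) rows carry no extra parameters, they are the same layers invoked once per time step by the loop in forward.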