Band Selection Algorithms and Heterogeneous Multi-Classifier Schemes for Enhanced Classification Accuracy of Hyper- Spectral Images

Author
  • Arvind Kumar Singh

    FET

    Faculty of Engineering and Technology

Abstract

In this research, an adaptive Two-Dimensional Multilayer Neural Network (TDMNN) architecture is proposed, designed and implemented for image compression and decompression. The adaptive TDMNN architecture performs compression and decompression by automatically choosing one of three TDMNN variants (linear, nonlinear and hybrid) based on the input image entropy and the required compression ratio. Because the architecture is two-dimensional, 2-D to 1-D reordering of the input image is avoided, and because it is implemented as a hybrid neural network, analog-to-digital conversion of the image input is eliminated. The architecture is trained to reconstruct images in the presence of noise introduced as channel errors.

A software reference model for the adaptive TDMNN architecture is designed and modeled using Matlab. A modified backpropagation algorithm that can train a two-dimensional network is proposed and used to train the TDMNN architecture. Performance metrics such as Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR) are computed and compared with the well-established DWT-SPIHT technique, showing a 10% to 25% improvement in reconstructed image quality in terms of MSE and PSNR. Software reference model results show that the compression and decompression time for the TDMNN architecture is less than 25 ms for an image of size 256 x 256, which is 60 times faster than the DWT-SPIHT technique.
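The MSE and PSNR figures quoted above follow the standard definitions for 8-bit images; a minimal sketch of how the software reference model could compute them (function names are illustrative, not taken from the thesis):

```python
import numpy as np

def mse(original, reconstructed):
    """Mean Square Error between two images of the same shape."""
    o = original.astype(np.float64)
    r = reconstructed.astype(np.float64)
    return float(np.mean((o - r) ** 2))

def psnr(original, reconstructed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB, assuming 8-bit pixel range."""
    err = mse(original, reconstructed)
    if err == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / err)

# Usage with a synthetic 256 x 256 image and a noisy "reconstruction"
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(256, 256)).astype(np.uint8)
noisy = np.clip(img.astype(np.int16) + rng.integers(-5, 6, size=img.shape),
                0, 255).astype(np.uint8)
print(mse(img, noisy), psnr(img, noisy))
```

A higher PSNR (equivalently, lower MSE) corresponds to better reconstruction quality, which is the sense in which the 10% to 25% improvement over DWT-SPIHT is reported.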

Based on the weights and biases of the network obtained from the software reference model, a VLSI implementation of the adaptive TDMNN architecture is carried out. A new hybrid multiplying DAC is designed that multiplies current intensities (analog input) with digital weights. The hybrid multiplier is integrated with an adder and a network function to realize a hybrid neuron cell. The hybrid neuron cell, designed using 1420 transistors, works at 200 MHz, consuming less than 232 mW of power, with a full-scale current of 65.535 µA. Multiple hybrid neurons are integrated together to realize the 2-D adaptive multilayer neural network architecture.

Conclusion and Recommendations for Future Work

In this research work, an adaptive two-dimensional multilayer neural network architecture has been proposed, designed, modelled, simulated and verified using both software and hardware models. The adaptive architecture automatically selects one of three 2-D multilayer neural network architectures based on image entropy and Bits Per Pixel (bpp). This architecture eliminates the need for 2-D to 1-D reordering of image samples and for analog-to-digital conversion of image intensities. A modified backpropagation algorithm suitable for training the 2-D multilayer network is proposed and used to train the TDMNN architecture. A software reference model for the adaptive TDMNN architecture is developed, and MSE, PSNR and Maximum Error for various images are computed with it. Network parameters such as the number of hidden layers and the number of neurons in each layer were tuned, and an input sub-block size of 4 x 4 was found to be optimum in terms of network performance and computation time; restricting the block size to 4 x 4 was a tradeoff between complexity and quality. Noise analysis carried out on the TDMNN shows a 2 to 25 times improvement compared to the DWT-SPIHT technique, and error analysis of the TDMNN architecture reveals a 10 to 30 times improvement over DWT-SPIHT.
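The 4 x 4 sub-block decomposition mentioned above can be sketched as follows; this is an illustrative NumPy tiling helper (the function names are assumptions, not from the thesis), showing how an image is split into the non-overlapping 4 x 4 tiles fed to the 2-D network and reassembled afterwards:

```python
import numpy as np

def to_blocks(image, block=4):
    """Split an H x W image into non-overlapping block x block tiles.
    Each tile stays two-dimensional, so no 2-D to 1-D reordering of
    the tile itself is needed."""
    h, w = image.shape
    assert h % block == 0 and w % block == 0, "image must tile evenly"
    return (image.reshape(h // block, block, w // block, block)
                 .swapaxes(1, 2)
                 .reshape(-1, block, block))

def from_blocks(blocks, h, w):
    """Inverse of to_blocks: reassemble tiles into an H x W image."""
    b = blocks.shape[-1]
    return (blocks.reshape(h // b, w // b, b, b)
                  .swapaxes(1, 2)
                  .reshape(h, w))

# Round trip on an 8 x 8 image: four 4 x 4 tiles in, same image out
img = np.arange(64).reshape(8, 8)
tiles = to_blocks(img)           # shape (4, 4, 4)
restored = from_blocks(tiles, 8, 8)
```

Larger block sizes would reduce the number of network invocations per image but increase the size (and training cost) of each network, which is the complexity/quality tradeoff the thesis cites.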

Initially, the TDMNN architecture was designed for image compression and decompression, and the performance of the three TDMNN architectures (linear, nonlinear and hybrid) was analyzed. From the results it was found that at 0.5 bpp the hybrid network achieves better PSNR and MSE than the linear network, while at 7.5 bpp the linear network performs better than the hybrid network. For bpp between 2 and 5, the hybrid network outperforms the linear network, and in some cases the nonlinear network achieved better results than both the linear and hybrid networks. Based on these results, it was concluded that network performance is a function of the image. Hence, to achieve better performance than conventional techniques, the adaptive TDMNN architecture is proposed: the entropy of the input image is computed, and based on the entropy and the required compression ratio, the control unit automatically selects the appropriate TDMNN architecture for compression and decompression. The adaptive TDMNN architecture is three to ten times better than a single TDMNN in terms of quality metrics such as MSE and PSNR, and its MSE and PSNR results at 4 bpp and below are three times better than the DWT-SPIHT results. Software simulation results show that the adaptive TDMNN architecture is 60 times faster than DWT-SPIHT. The network is trained with multiple image data sets to generalize it for compression of various images.
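The control unit's decision described above can be sketched in a few lines. The entropy computation is the standard Shannon entropy of the pixel histogram; the selection thresholds below are hypothetical placeholders, since the thesis states only that selection depends on entropy and the required bpp, and reports which variant wins at 0.5, 2 to 5, and 7.5 bpp:

```python
import numpy as np

def image_entropy(image):
    """Shannon entropy (bits/pixel) of an 8-bit grayscale image."""
    hist = np.bincount(image.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty histogram bins
    return float(-(p * np.log2(p)).sum())

def select_architecture(entropy_bpp, target_bpp):
    """Illustrative control-unit rule choosing a TDMNN variant.
    Thresholds are assumptions for demonstration, loosely following
    the reported results (hybrid best near 0.5 bpp and in the 2-5 bpp
    range, linear best near 7.5 bpp)."""
    if target_bpp >= 7.5:
        return "linear"
    if target_bpp <= 0.5 or entropy_bpp > 4.0:
        return "hybrid"
    return "nonlinear"
```

In the actual architecture this selection drives a hardware control unit rather than a software branch, but the decision structure is the same.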

Based on the software reference model, the basic building blocks of the adaptive TDMNN architecture are identified for VLSI implementation: multipliers, adders and network functions. VLSI implementation of the TDMNN building blocks is carried out using industry-standard tools, and an analog network architecture is designed and implemented for image compression and decompression. Three different neuron cells (a Gilbert cell based neuron, a modified Gilbert cell based neuron and a hybrid neuron cell) have been designed and analyzed for area, timing and power performance. The hybrid neuron cell is selected as it is found to be the most suitable for VLSI implementation; it multiplies analog samples with digital weights, hence the name, and consists of a multiplier, an adder and a network function. The adaptive TDMNN architecture requires 128 multipliers per network (hidden layer and output layer). A multiplier based on a modified multiplying DAC (MDAC) architecture is designed using a total of 2816 transistors, occupying an area of 7884 µm². The hybrid cell based on the modified MDAC is designed using NMOS transistors, works at a 200 MHz clock frequency, and consumes 232 mW of power at the maximum full-scale current of 65.535 µA. The weights and biases obtained during training are stored in Read-Only Memory (ROM). A test setup for verifying the TDMNN architecture as compressor and decompressor is designed; image sizes from 64 x 64 down to 8 x 8 were used to test network performance, and the MSE computed for the different image sizes varies from 21 to 14. These results were validated against the software reference model, and the difference between the hardware and software models was less than 10%.
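The ideal transfer behavior of the multiplying DAC at the heart of the hybrid neuron cell can be modeled behaviorally. This is a sketch, not the transistor-level design: it assumes a 16-bit weight code, which is consistent with the 65.535 µA full-scale figure (65535 steps of a 1 nA LSB), and the function name is illustrative:

```python
def mdac_output_current(i_in_uA, weight_code, bits=16, i_fullscale_uA=65.535):
    """Ideal behavioral model of the hybrid multiplying DAC:
    an analog input current (in uA) scaled by a digital weight code.
    Assumes a 16-bit code, inferred from the 65.535 uA full-scale
    current; device-level effects are not modeled."""
    assert 0 <= weight_code < (1 << bits), "weight code out of range"
    scale = weight_code / ((1 << bits) - 1)   # digital weight as a fraction
    i_out = i_in_uA * scale
    return min(i_out, i_fullscale_uA)          # clamp at full-scale current

# A full-scale weight passes the input current through unchanged,
# a zero weight blocks it entirely.
print(mdac_output_current(10.0, 65535), mdac_output_current(10.0, 0))
```

In the hardware neuron, many such weighted currents are summed on a shared node (the adder) before passing through the network activation function.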
The network was tested for various compression ratios, and the results obtained were found to agree with the software reference model. A full-chip design of the proposed architecture was implemented using Cadence Virtuoso, and physical verification was carried out using Assura. DRC and LVS checks were performed, and GDSII was then generated for chip fabrication.