Journal of VLSI Signal Processing Systems for Signal, Image and Video Technology

  ISSN: 0922-5773

 

 

Managing organization: N/A

Subject area:


Featured articles

Segmentation of Multi-Channel Image with Markov Random Field Based Active Contour Model
Volume 31 - Pages 45-55 - 2002
Dongxiang Xu, Jenq-Neng Hwang, Chun Yuan
Segmentation is an important research area in image processing and computer vision. The essential purpose of such work is to achieve two goals: (i) partition the image into homogeneous regions based on certain properties, and (ii) accurately track the boundary of each region. In this study, we present a novel framework designed to fulfill both requirements. Unlike most existing approaches, our method divides the segmentation process into three steps: global region segmentation, control point searching, and object boundary tracking. In the first step, we apply Markov Random Field (MRF) modeling to multi-channel images and propose a robust energy minimization approach to solve the multi-dimensional MRF. In the second step, control points are found along the target region boundary using a maximum reliability criterion and deployed to automatically initialize a Minimum Path Approach (MPA). Finally, the active contour evolves to the optimal solution in a fine-tuning process. We have applied this framework to color images and multi-contrast weighted magnetic resonance image data, and the experimental results show encouraging performance. Moreover, the proposed approach has the potential to handle topology changes and composite objects in boundary tracking.
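As a rough illustration of the MRF labeling step described above (not the authors' formulation: the class means, the 4-neighborhood smoothness term, and the iterated-conditional-modes solver are all assumptions of this sketch), the following Python fragment labels a multi-channel image by minimizing a simple data-plus-smoothness energy. The paper's control-point search and minimum-path boundary refinement would then operate on the boundaries of such a labeling.

```python
# Illustrative sketch only: basic iterated-conditional-modes (ICM) labeling of a
# multi-channel image. Class means, beta and the 4-neighborhood are assumptions.
import numpy as np

def icm_segment(image, means, beta=1.0, n_iter=5):
    """image: (H, W, C) multi-channel array; means: (K, C) class means."""
    h, w, _ = image.shape
    # Data term: squared distance of every pixel to every class mean -> (H, W, K).
    data = ((image[:, :, None, :] - means[None, None, :, :]) ** 2).sum(-1)
    labels = data.argmin(-1)                     # initial labels from the data term alone
    for _ in range(n_iter):
        for y in range(h):
            for x in range(w):
                # Smoothness term: how many 4-neighbors disagree with each candidate label.
                nbrs = [labels[yy, xx] for yy, xx in
                        ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                        if 0 <= yy < h and 0 <= xx < w]
                smooth = np.array([sum(n != k for n in nbrs) for k in range(len(means))])
                labels[y, x] = (data[y, x] + beta * smooth).argmin()
    return labels

# Tiny synthetic 3-channel example with two classes.
rng = np.random.default_rng(0)
img = rng.normal(0.0, 0.3, (32, 32, 3))
img[8:24, 8:24] += 1.0                           # bright square on a dark background
seg = icm_segment(img, means=np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]]))
print(seg.sum(), "pixels labelled as the bright class")
```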
Design of a Cellular Architecture for Fast Computation of the Skeleton
Volume 35 - Pages 61-73 - 2003
N. Sudha
This paper presents a new algorithm for extracting the skeleton and its Euclidean distance values from a binary image. The extracted skeleton reconstructs the objects in the image exactly. The algorithm runs in O(n) time for an image of size n × n. It involves only simple local neighborhood operations at each pixel and is therefore well suited to VLSI implementation on a cellular architecture. Results of simulating the algorithm on a sequential computer are presented, together with results of implementing the VLSI design on a Xilinx FPGA; they confirm the speed and suitability of the method for real-time applications.
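For a rough sense of the quantities involved (Euclidean distance values and a skeleton-like ridge), here is a short sketch built on SciPy's distance transform. It is not the paper's O(n) cellular algorithm, and unlike the paper's skeleton this ridge is not guaranteed to reconstruct the object exactly.

```python
# Illustrative sketch only: Euclidean distance transform plus a crude ridge test
# as a skeleton approximation; the paper's reconstructive cellular algorithm differs.
import numpy as np
from scipy.ndimage import distance_transform_edt, maximum_filter

def approx_skeleton(binary):
    dist = distance_transform_edt(binary)                    # Euclidean distance to background
    ridge = (dist == maximum_filter(dist, size=3)) & (dist > 0)
    return ridge, dist                                       # skeleton-like ridge + distances

obj = np.zeros((64, 64), dtype=bool)
obj[16:48, 24:40] = True                                     # a solid rectangle
skel, dist = approx_skeleton(obj)
print(skel.sum(), "ridge pixels, max distance", dist.max())
```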
Automatic Generation of Modular Time-Space Mappings and Data Alignments
Volume 19 - Pages 195-208 - 1998
Hyuk-Jae Lee, José A.B. Fortes
Time-space transformations and data alignments that can lead to efficient execution of parallel programs have been extensively studied. Recently, modular time-space transformations have been proposed to generate a class of algorithm mappings that cannot be described by linear time-space transformations. This paper proposes a new class of data alignments, called expanded modular data alignments (EMDAs), for programs that result from modular time-space transformations. An EMDA subsumes multiple modular data alignments, which are described by affine functions modulo a constant vector. Conditions on a modular time-space mapping and an EMDA for perfect alignment are described. However, these conditions, together with other conditions for validity and optimality of a modular mapping, introduce nonlinear constraints into the problem of generating modular mappings. A method of O(n²) complexity is provided to choose some entries of the transformation matrix so that the nonlinear constraints become linear, where n is the dimension of the computation domain (e.g., the number of nested loops). Although the solution space is reduced by assigning fixed values to some entries, the proposed heuristic attempts to minimize the number of fixed entries and consequently to exclude as few solutions as possible.
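A minimal sketch of what a modular time-space mapping does to the index points of a loop nest, with a made-up transformation matrix and modulus vector; the paper's EMDA alignments and its O(n²) entry-selection heuristic are not reproduced here.

```python
# Illustrative sketch only: apply (t, p) = (T @ i) mod m to every iteration of a
# 2-deep loop nest. T and m are arbitrary example values, not taken from the paper.
import numpy as np

T = np.array([[1, 1],      # first row: time schedule
              [1, 0]])     # second row: processor assignment
m = np.array([4, 2])       # moduli applied componentwise

N = 4
index_points = np.array([(i, j) for i in range(N) for j in range(N)])
mapped = (index_points @ T.T) % m        # (time slot, processor) for every iteration (i, j)

for (i, j), (t, p) in zip(index_points, mapped):
    print(f"iteration ({i},{j}) -> time {t}, processor {p}")
```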
Fault diagnosis in reconfigurable VLSI and WSI processor arrays
Volume 2 - Pages 173-187 - 1990
Sy-Yen Kuo, Kuochen Wang
A systematic and efficient fault diagnosis method for reconfigurable VLSI/WSI array architectures is presented. The basic idea is to exploit the independence of output data paths among a subset of processing elements (PEs), based on the topology of the array under test. A divide-and-conquer technique is applied to reduce the complexity of test application and to enhance the controllability and observability of a processor array. The array under test is divided into nonoverlapping diagnosis blocks, and the PEs in the same diagnosis block can be diagnosed concurrently. The problem of finding diagnosis blocks is shown to be equivalent to a generalized Eight Queens problem. Three types of PEs and one type of switch, designed to be easily testable and reconfigurable, are used to show how to apply this approach. The main contributions of this paper are an efficient switch and link testing procedure and a novel PE fault diagnosis approach that speeds up testing by at least O(|V|^{1/2}) for the processor arrays considered here, where |V| is the number of PEs. The significance of our approach is its ability to detect and locate multiple PE, switch, and link faults with little or no hardware overhead.
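To illustrate the Eight Queens connection only, here is a plain N-queens backtracking solver in Python; the paper's generalization, which selects mutually independent PEs over the array topology to form diagnosis blocks, is not reproduced.

```python
# Illustrative sketch only: classic N-queens placement (no shared row, column or
# diagonal), the constraint pattern the paper generalizes for diagnosis blocks.
def n_queens(n):
    cols, diag1, diag2, placement = set(), set(), set(), []

    def place(row):
        if row == n:
            return True
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue
            cols.add(col); diag1.add(row - col); diag2.add(row + col)
            placement.append(col)
            if place(row + 1):
                return True
            cols.remove(col); diag1.remove(row - col); diag2.remove(row + col)
            placement.pop()
        return False

    return placement if place(0) else None

# One valid placement for an 8x8 array: column index of the selected cell per row.
print(n_queens(8))
```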
A New Algorithm for the Elimination of Common Subexpressions in Hardware Implementation of Digital Filters by Using Genetic Programming
Volume 31 - Pages 91-100 - 2002
H. Safiri, M. Ahmadi, G.A. Jullien, W.C. Miller
A new algorithm based on Genetic Programming (GP) is developed for optimizing Multiple Constant Multiplication (MCM) through Common Subexpression Elimination (CSE). The method is used for hardware optimization of DSP systems. The performance of the technique is demonstrated on one- and multi-dimensional digital filters with constant coefficients.
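As a hedged illustration of the redundancy that CSE exploits in MCM (not the paper's GP search), the sketch below counts how often a two-term pattern x + (x << d) recurs across a set of hypothetical constant coefficients; each shared pattern can be built once and reused as an adder.

```python
# Illustrative sketch only: frequency count of two-set-bit patterns across
# example coefficients; signed-digit recoding and the GP optimizer are not modeled.
from collections import Counter
from itertools import combinations

coeffs = [0b101011, 0b110101, 0b101101]        # hypothetical constant multipliers

pattern_count = Counter()
for c in coeffs:
    bits = [i for i in range(c.bit_length()) if (c >> i) & 1]
    for lo, hi in combinations(bits, 2):
        pattern_count[hi - lo] += 1            # pattern identified by the bit distance

dist, count = pattern_count.most_common(1)[0]
print(f"most common subexpression: x + (x << {dist}), occurs {count} times")
```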
A Low Complexity and Low Power SoC Design Architecture for Adaptive MAI Suppression in CDMA Systems
Volume 44 - Pages 195-217 - 2006
Yuanbin Guo, Joseph R. Cavallaro
In this paper, we propose a reduced-complexity, power-efficient System-on-Chip (SoC) architecture for adaptive interference suppression in CDMA systems. The adaptive Parallel-Residue-Compensation architecture leads to significant performance gains over conventional interference cancellation algorithms. Multi-code commonality is exploited to avoid direct Interference Cancellation (IC), which reduces the IC complexity from $\mathcal{O}(K^2N)$ to $\mathcal{O}(KN)$. The physical meaning of complete versus weighted IC is used to clip weights above a certain threshold so as to reduce the VLSI circuit activity rate. Novel scalable SoC architectures based on simple combinational logic are proposed to eliminate dedicated multipliers, with at least $10\times$ savings in hardware resources. A Catapult C High-Level Synthesis methodology is applied to explore the VLSI design space extensively and achieve at least a $4\times$ speedup. A multi-stage Convergence-Masking-Vector combined with clock gating is proposed to reduce the VLSI dynamic power consumption by up to $90\%$.
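A small numerical sketch of the complexity reduction claimed above, with placeholder spreading codes and symbols (the paper's residue-compensation weighting, clipping and SoC datapath are not modeled): the total multi-code signal is formed once and each user's own contribution is subtracted, giving O(KN) work instead of summing the other K-1 users separately for every user, which would be O(K²N).

```python
# Illustrative sketch only: shared-total interference cancellation in a toy CDMA model.
import numpy as np

rng = np.random.default_rng(1)
K, N = 8, 64                                     # users, chips per symbol
codes = rng.choice([-1.0, 1.0], size=(K, N))     # placeholder spreading codes
symbols = rng.choice([-1.0, 1.0], size=K)        # transmitted symbols

contrib = symbols[:, None] * codes               # each user's chip-level contribution
total = contrib.sum(axis=0)                      # formed once: O(K N)
received = total + 0.1 * rng.normal(size=N)      # noisy chip-level observation

interference = total[None, :] - contrib          # per-user MAI estimate, reusing the total
soft = ((received[None, :] - interference) * codes).sum(axis=1) / N   # despread each user
print("recovered symbols:", np.sign(soft).astype(int))
```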
Probabilistic complexity analysis for a class of approximate DFT algorithms
Volume 14 - Pages 193-205 - 1996
Joseph M. Winograd, S. Hamid Nawab
We present a probabilistic complexity analysis of a class of multi-stage algorithms which incrementally refine DFT approximations. Each stage of any algorithm in this class improves the results of the previous stage by a fixed increment in one of three dimensions: SNR, frequency resolution, or frequency coverage. However, the complexity of each stage is probabilistically dependent upon certain characteristics of the input signal. Assuming that an algorithm has to be terminated before its arithmetic cost exceeds a given limit, we have formulated a method for predicting the probability of completion of each of the algorithm's stages. This analysis is useful for low-power and real-time applications where FFT algorithms cannot meet the specified limits on arithmetic cost.
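The sketch below estimates the same kind of quantity by Monte Carlo simulation under an assumed (geometric) per-stage cost model; the paper instead derives the completion probabilities analytically from input-signal characteristics.

```python
# Illustrative sketch only: probability that each refinement stage of an
# incremental algorithm completes before a fixed arithmetic budget is exhausted.
# The per-stage cost distribution is a made-up placeholder, not the paper's model.
import numpy as np

rng = np.random.default_rng(2)
n_stages, budget, trials = 6, 40, 10_000

costs = rng.geometric(p=0.15, size=(trials, n_stages))   # random cost of each stage
cumulative = costs.cumsum(axis=1)                        # cost consumed after each stage

completion_prob = (cumulative <= budget).mean(axis=0)
for s, p in enumerate(completion_prob, start=1):
    print(f"P(stage {s} completes within budget) = {p:.2f}")
```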
Guest Editorial
Volume 31 - Pages 75-76 - 2002
Michael J. Schulte, Graham A. Jullien
Foveation-Based Error Resilience and Unequal Error Protection over Mobile Networks
- 2003
Sanghoon Lee, Chris Podilchuk, Vidhya Krishnan, Alan C. Bovik
By exploiting new human-machine interface techniques, such as visual eye trackers, it should be possible to develop more efficient visual multimedia services with low bandwidth, dynamic channel adaptation, and robust visual data transmission. In this paper, we introduce foveation-based error resilience and unequal error protection techniques for highly error-prone mobile networks. Each frame is spatially divided into foveated and background layers according to perceptual importance, which is determined either through an eye tracker or by manually selecting a region of interest. We attempt to improve reconstructed visual quality by maintaining a high visual source throughput for the foveated layer, using foveation-based error resilience and error correction based on a combination of turbo codes and ARQ (Automatic Repeat reQuest). To alleviate the degradation of visual quality, a foveation-based bitstream partitioning is developed. To further increase the source throughput of the foveated layer, we develop unequal delay-constrained ARQ and rate-compatible punctured turbo codes, where the puncturing pattern of the RCPC (rate-compatible punctured convolutional) codes in H.223 Annex C is used. In simulation, the visual quality in the area of interest is significantly increased by foveation-based error resilience and unequal error protection (as much as a 3 dB improvement in FPSNR, the foveal peak signal-to-noise ratio) at a 40% packet error rate. Over real fading statistics measured in the downtown area of Austin, Texas, the visual quality is increased by up to 1.5 dB in PSNR and 1.8 dB in FPSNR at a channel SNR of 5 dB.
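A minimal sketch of the foveated/background partition that drives the unequal protection, with an assumed fixation point and foveal radius; the turbo/RCPC coding, delay-constrained ARQ, and H.223 packetization are not modeled.

```python
# Illustrative sketch only: split a frame into a foveated layer (to receive
# stronger protection) and a background layer, by distance from a fixation point.
import numpy as np

H, W = 144, 176                       # QCIF-sized placeholder frame
fix_y, fix_x, radius = 72, 88, 40     # assumed fixation point and foveal radius

yy, xx = np.mgrid[0:H, 0:W]
foveated_mask = (yy - fix_y) ** 2 + (xx - fix_x) ** 2 <= radius ** 2

frame = np.random.default_rng(3).integers(0, 256, (H, W), dtype=np.uint8)
foveated_layer = np.where(foveated_mask, frame, 0)      # candidate for strong protection
background_layer = np.where(foveated_mask, 0, frame)    # candidate for weaker protection
print("foveated pixels:", int(foveated_mask.sum()), "of", H * W)
```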
Discriminatory Mining of Gene Expression Microarray Data
Volume 35 - Pages 255-272 - 2003
Zuyi Wang, Yue Wang, Jianping Lu, Sun-Yuan Kung, Junying Zhang, Richard Lee, Jianhua Xuan, Javed Khan, Robert Clarke
Recent advances in machine learning and pattern recognition provide new analytical tools for exploring high-dimensional gene expression microarray data. Our data mining software, VISual Data Analyzer for cluster discovery (VISDA), reveals many distinguishing patterns among gene expression profiles that are responsible for cell phenotypes. The model-supported exploration of the high-dimensional data space is achieved through two complementary schemes: dimensionality reduction by discriminatory data projection and cluster decomposition by soft data clustering. Dimensionality reduction generates a visualization of the complete data set at the top level. The data set is then partitioned into subclusters that can in turn be visualized at lower levels and, if necessary, partitioned again. In this paper, three algorithms are evaluated for their ability to reduce dimensionality and visualize data sets: Principal Component Analysis (PCA), Discriminatory Component Analysis (DCA), and the Projection Pursuit Method (PPM). The partitioning into subclusters uses the Expectation-Maximization (EM) algorithm and a hierarchical normal mixture model that is selected by the user and verified "optimally" by the Minimum Description Length (MDL) criterion. These approaches produce different visualizations, which are compared against known phenotypes from the microarray experiments. Overall, these algorithms and user-selected models explore high-dimensional data where standard analyses may not be sufficient.
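As a loose analogue of VISDA's two complementary steps, the sketch below projects data to two dimensions and then performs soft (EM) clustering, assuming scikit-learn's PCA and Gaussian mixture in place of the paper's DCA/PPM projections and MDL-guided model selection, and synthetic data in place of a real expression matrix.

```python
# Illustrative sketch only: dimensionality reduction followed by soft clustering
# on a synthetic "expression matrix" with two hidden phenotypes.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
data = rng.normal(size=(60, 500))                 # 60 samples x 500 genes
data[:30, :50] += 2.0                             # phenotype-specific signature

projected = PCA(n_components=2).fit_transform(data)           # top-level visualization
gmm = GaussianMixture(n_components=2, random_state=0).fit(projected)
posteriors = gmm.predict_proba(projected)                      # soft cluster memberships
print("cluster sizes:", np.bincount(posteriors.argmax(axis=1)))
```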