Theory of Computing Systems

Notable scientific publications

* Data is for reference only

A Burnside Approach to the Finite Substitution Problem
Theory of Computing Systems - Vol. 39 - pp. 15-50 - 2005
Daniel Kirsten
We introduce a new model of weighted automata: the desert automata. We show that their limitedness problem is PSPACE-complete by solving the underlying Burnside problem. As an application of this result, we give a positive solution to the so-called finite substitution problem which was open for more than 10 years: given recognizable languages K and L, decide whether there exists a finite substitution σ such that σ(K) = L.
On the Partial Vertex Cover Problem in Bipartite Graphs - a Parameterized Perspective
Theory of Computing Systems - - pp. 1-22 - 2023
Vahan Mkrtchyan, Garik Petrosyan, K. Subramani, Piotr Wojciechowski
In this paper, we examine variants of the partial vertex cover problem from the perspective of parameterized algorithms. Recall that in the classical vertex cover problem (VC), we are given a graph $\mathbf{G} = \langle \mathbf{V}, \mathbf{E} \rangle$ and a number k and asked whether we can cover all of the edges in $\mathbf{E}$ using at most k vertices from $\mathbf{V}$. The partial vertex cover problem (PVC) is a more general version of the VC problem in which we are given an additional parameter $k'$. We then ask whether at least $k'$ of the edges in $\mathbf{E}$ can be covered using at most k vertices from $\mathbf{V}$. Note that the VC problem is the special case of the PVC problem in which $k' = |\mathbf{E}|$. In this paper, we study weighted generalizations of the PVC problem, collectively called the weighted partial vertex cover problem (WPVC). In the WPVC problem, we are given two parameters R and L, associated with the vertex set $\mathbf{V}$ and the edge set $\mathbf{E}$ of the graph $\mathbf{G}$, respectively. Additionally, we are given non-negative integral weight functions on the vertices and the edges. The goal is then to cover edges of total weight at least L using vertices of total weight at most R. This paper studies several variants of the PVC and WPVC problems and establishes new results from the perspective of fixed-parameter tractability and W[1]-hardness. We also introduce a new problem, partial vertex cover with matching constraints, and show that it is fixed-parameter tractable (FPT) for a certain class of graphs. Finally, we show that the WPVC problem is APX-complete for bipartite graphs.
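As a concrete illustration of the unweighted PVC question defined in the abstract, here is a brute-force checker (exponential in k, so only for tiny instances; all names are illustrative, not from the paper):

```python
from itertools import combinations

def has_partial_cover(vertices, edges, k, k_prime):
    """Is there a set of at most k vertices covering at least k' edges?"""
    for size in range(min(k, len(vertices)) + 1):
        for subset in combinations(vertices, size):
            chosen = set(subset)
            covered = sum(1 for u, v in edges if u in chosen or v in chosen)
            if covered >= k_prime:
                return True
    return False

# Path graph 0-1-2-3: vertex 1 alone covers two edges, but covering all
# three edges (the VC special case k' = |E|) needs two vertices.
path_edges = [(0, 1), (1, 2), (2, 3)]
print(has_partial_cover([0, 1, 2, 3], path_edges, 1, 2))  # True
print(has_partial_cover([0, 1, 2, 3], path_edges, 1, 3))  # False
print(has_partial_cover([0, 1, 2, 3], path_edges, 2, 3))  # True
```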
On the Average Case of MergeInsertion
Theory of Computing Systems - Vol. 64 - pp. 1197-1224 - 2020
Florian Stober, Armin Weiß
MergeInsertion, also known as the Ford-Johnson algorithm, is a sorting algorithm which, to this day, achieves the best known upper bound on the number of comparisons for many input sizes. Indeed, it gets extremely close to the information-theoretic lower bound. While the worst-case behavior is well understood, little is known about the average case. This work takes a closer look at the average-case behavior. In particular, we establish an upper bound of $n \log n - 1.4005n + o(n)$ comparisons. We also give an exact description of the probability distribution of the length of the chain a given element is inserted into and use it to approximate the average number of comparisons numerically. Moreover, we compute the exact average number of comparisons for n up to 148. Furthermore, we experimentally explore the impact of different decision trees for binary insertion. To conclude, we conduct experiments showing that a slightly different insertion order leads to a better average case, and we compare the algorithm to Manacher's combination of merging and MergeInsertion as well as to the recent combined algorithm with (1,2)-Insertionsort by Iwama and Teruyama.
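The closeness to the lower bound mentioned above can be checked numerically: the information-theoretic bound is $\lceil \log_2 n! \rceil$, and the worst-case comparison count of MergeInsertion has the well-known closed form $\sum_{k=1}^{n} \lceil \log_2 (3k/4) \rceil$ (Knuth, TAOCP Vol. 3). A small sketch:

```python
import math

def info_lower_bound(n):
    """Information-theoretic lower bound: ceil(log2(n!)) comparisons."""
    return math.ceil(math.log2(math.factorial(n)))

def merge_insertion_worst(n):
    """Worst-case comparisons of MergeInsertion: sum of ceil(log2(3k/4))."""
    return sum(math.ceil(math.log2(3 * k / 4)) for k in range(1, n + 1))

print(info_lower_bound(5), merge_insertion_worst(5))    # 7 7  (optimal at n=5)
print(info_lower_bound(12), merge_insertion_worst(12))  # 29 30
```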
On the Autoreducibility of Functions
Theory of Computing Systems - Vol. 46 - pp. 222-245 - 2008
Piotr Faliszewski, Mitsunori Ogihara
This paper studies the notions of self-reducibility and autoreducibility. Our main result regarding length-decreasing self-reducibility is that any complexity class $\mathcal{C}$ that has a (logspace) complete language and is closed under polynomial-time (logspace) padding has the property that if all $\mathcal{C}$-complete languages are length-decreasing (logspace) self-reducible, then $\mathcal{C} \subseteq \mathrm{P}$ ($\mathcal{C} \subseteq \mathrm{L}$). In particular, this result applies to NL, NP, and PSPACE. We also prove an analogue of this theorem for function classes (for example, for #P). We further show that for several hard function classes, in particular for #P, all of their complete functions are deterministically autoreducible. In particular, we show the following result. Let f be a #P parsimonious function with two preimages of 0. Then there are two FP functions h and t such that for all inputs x we have f(x) = t(x) + f(h(x)), h(x) ≠ x, and t(x) ∈ {0,1}. Our results regarding single-query autoreducibility of #P functions can be contrasted with random self-reducibility, for which it is known that if a #P-complete function were random self-reducible with one query, then the polynomial hierarchy would collapse.
The range of a vector measure with values in a Montel space
Theory of Computing Systems - Vol. 5, No. 2 - pp. 145-147 - 1971
John S. Lew
Recognizing binary Hamming graphs in $O(n^2 \log n)$ time
Theory of Computing Systems - - 1995
Franz Aurenhammer, Johann Hagauer
Size, index, and context-sensitivity of controlled partition grammars
Theory of Computing Systems - Vol. 11 - pp. 47-60 - 1977
Eva-Maria Mückstein Wotschke, Detlef Wotschke, Peter J. Downey
Controlled partition grammars (CPGs) were designed to address certain needs of linguists. General CPGs generate exactly the context-sensitive languages. CPGs have two parameters: size and index. The partition index of CPGs can be bounded by two, while CPGs with partition index one generate exactly the class of context-free languages. The size (of the partition blocks) of CPGs can be bounded by two, while CPGs of size one generate a class of languages properly contained in the class of context-sensitive languages. If one can eliminate recursive productions of the form A → B in a CPG, then deterministic and nondeterministic LBAs are equivalent.
Chinese Remaindering with Multiplicative Noise
Theory of Computing Systems - Vol. 40 - pp. 33-41 - 2005
Igor E. Shparlinski, Ron Steinfeld
We use lattice reduction to obtain a polynomial-time algorithm for recovering an integer (up to a multiple) given multiples of its residues modulo sufficiently many primes, when the multipliers are unknown but small.
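For background, the noise-free version of this reconstruction — ordinary Chinese remaindering from residues modulo pairwise-coprime primes — can be sketched as follows (the paper's lattice-based handling of unknown multipliers is considerably more involved):

```python
from math import prod

def crt(residues, moduli):
    """Recover x mod prod(moduli) from x mod m_i, for pairwise-coprime moduli."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        # pow(Mi, -1, m) is the modular inverse of Mi modulo m (Python 3.8+).
        x += r * Mi * pow(Mi, -1, m)
    return x % M

primes = [3, 5, 7, 11]
secret = 103  # any integer below 3 * 5 * 7 * 11 = 1155
residues = [secret % p for p in primes]
print(crt(residues, primes))  # 103
```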
Spectral Sparsification in the Semi-streaming Setting
Theory of Computing Systems - Vol. 53 - pp. 243-262 - 2012
Jonathan A. Kelner, Alex Levin
Let G be a graph with n vertices and m edges. A sparsifier of G is a sparse graph on the same vertex set approximating G in some natural way. It allows us to say useful things about G while considering far fewer than m edges. The strongest commonly used notion of sparsification is spectral sparsification; H is a spectral sparsifier of G if the quadratic forms induced by the Laplacians of G and H approximate one another well. This notion is strictly stronger than the earlier concept of combinatorial sparsification. In this paper, we consider a semi-streaming setting, where we have only $\tilde{O}(n)$ storage space and thus cannot keep all of G. In this case, maintaining a sparsifier instead gives us a useful approximation to G, allowing us to answer certain questions about the original graph without storing all of it. We introduce an algorithm for constructing a spectral sparsifier of G with $O(n \log n / \epsilon^2)$ edges (where $\epsilon$ is a parameter measuring the quality of the sparsifier), taking $\tilde{O}(m)$ time and requiring only one pass over G. In addition, our algorithm has the property that it maintains at all times a valid sparsifier for the subgraph of G that we have received. Our algorithm is natural and conceptually simple. As we read edges of G, we add them to the sparsifier H. Whenever H gets too big, we resparsify it in $\tilde{O}(n)$ time. Adding edges to a graph changes the structure of its sparsifier's restriction to the already existing edges. It would thus seem that the above procedure would cause errors to compound each time we resparsify, and that we would need to either retain significantly more information or reexamine previously discarded edges in order to construct the new sparsifier. However, we show how to use the information contained in H to perform this resparsification using only the edges retained by earlier steps, in nearly linear time.
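The quadratic-form condition defining a spectral sparsifier can be checked directly for small graphs. The sketch below is a toy illustration of the definition only, not the paper's algorithm: it merges two parallel unit-weight edges into one edge of weight 2, which preserves the Laplacian quadratic form exactly, and then tests the $(1 \pm \epsilon)$ condition on random vectors.

```python
import numpy as np

def laplacian(n, edges):
    """Weighted graph Laplacian on n vertices; edges are (u, v, weight) triples."""
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

def looks_like_spectral_sparsifier(LG, LH, eps, trials=1000, seed=0):
    """Sample random x and check (1-eps) x'LGx <= x'LHx <= (1+eps) x'LGx."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x = rng.standard_normal(LG.shape[0])
        qg, qh = x @ LG @ x, x @ LH @ x
        if not ((1 - eps) * qg <= qh <= (1 + eps) * qg):
            return False
    return True

# G: multigraph on {0, 1, 2} with two parallel edges (0, 1) and one edge (1, 2);
# H: the same graph with the parallel edges merged into a single weight-2 edge.
LG = laplacian(3, [(0, 1, 1.0), (0, 1, 1.0), (1, 2, 1.0)])
LH = laplacian(3, [(0, 1, 2.0), (1, 2, 1.0)])
print(looks_like_spectral_sparsifier(LG, LH, eps=0.01))  # True
```

Dropping the (1, 2) edge instead would break the condition, since vectors separating vertices 1 and 2 make the quadratic forms diverge.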
Semiflows associated with compact and uniform processes
Theory of Computing Systems - Vol. 8 - pp. 142-149 - 1974
Constantine M. Dafermos
Total: 1,577