Member since Nov 2018
40 posts

2019-01-25, 12:39 #1
Subject: Question regarding Problem 10.2 and Challenge
We were made aware of an objection to the reference solution for Problem 10.2. The reference solution was:
"Given a formula in DNF, a valid assignment only needs to satisfy any one of the conjunctive clauses. A conjunctive clause is satisfiable iff it does not contain two literals of the forms A and ¬A. If a conjunctive clause is satisfiable, the literals in the clause yield a valid partial assignment that can be arbitrarily extended on the variables not occurring in the clause. All of this can easily be checked in polynomial time."

The objection is: "All known algorithms that convert arbitrary formulas into DNF can produce DNFs exponentially longer than the input formula. To find a partial assignment we may, in the worst case, have to check the whole DNF. Therefore, in the worst case, this takes time exponential in the length of the input formula."

This objection to the solution is correct. We have thought of two ways to fix the solution:

1. We have convinced ourselves (but not proven) that a simple exponential algorithm (using distribution of conjunction over disjunction) that transforms CNFs (wlog) into DNFs can be amended, without increasing its complexity, to delete all clauses containing literals of the form A and ¬A. Edit: we suspect this holds for any such algorithm. Hence we can assume wlog that the output contains no such clauses, and we can then take any clause (e.g. the first one) as a partial assignment without having to read the whole DNF.

Challenge: 30 bonus points if somebody proves that the complexity of any algorithm that transforms CNFs into equivalent DNFs does indeed not increase when inconsistent clauses are deleted.

2. Another way is to assess the hardness of the search problem directly instead of going through the hardness of the decision problem (the reference solution implicitly invokes the decision problem of deciding whether a given DNF is equivalent to the input formula). Denote the search problem of translating an arbitrary formula \phi into DNF by "TODNF". We want to show that TODNF is NP-hard.
Suppose for contradiction that TODNF is not NP-hard (that is, it is easier than some problem in NP). Then it must be in NP. By the definition of NP there exists a nondeterministic Turing machine that solves TODNF in polynomial time. This includes writing down the DNF, so the DNF must be polynomial in the length of \phi. But then we can read off an assignment for \phi in polynomial time and thereby solve SAT. But SAT is NP-hard. Contradiction. Therefore TODNF is NP-hard.
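For concreteness, the polynomial-time check from the reference solution can be sketched as follows (a minimal sketch; encoding literals as +i for X_i and -i for ¬X_i is our own convention, not part of the problem statement):

```python
def clause_is_consistent(clause):
    """A conjunctive clause is satisfiable iff it contains no pair of
    literals A and ¬A. Literals are encoded as +i for X_i, -i for ¬X_i."""
    return not any(-lit in clause for lit in clause)

def dnf_sat(dnf, num_vars):
    """Return a satisfying assignment for a DNF (a list of clauses),
    or None. One linear scan over the DNF, so polynomial in its length."""
    for clause in map(set, dnf):
        if clause_is_consistent(clause):
            # The clause yields a partial assignment; extend it
            # arbitrarily (here: False) on the remaining variables.
            assignment = {v: False for v in range(1, num_vars + 1)}
            for lit in clause:
                assignment[abs(lit)] = lit > 0
            return assignment
    return None
```

Note that this is polynomial in the length of the DNF, which is exactly why the objection targets the length of the DNF relative to the original formula.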
This post was edited on 2019-01-25, 16:00 by rappatoni.

Member since Nov 2016
18 posts

2019-01-25, 13:14 #2
I'm not quite sure I understood the challenge.
You want us to prove that an exponential algorithm that transforms CNF into DNF won't get more complex if we add a step that deletes trivially false subformulas? If yes, then why?
Member since Nov 2018
40 posts

2019-01-25, 14:09 #3
Why do I want you to prove it (a), or why would it get more complex (b)?

a) Firstly, because it would close the hole in the new reference solution; secondly, why not?

b) You need to check the clauses somehow; that takes some time, memory, etc. We don't think it will get more complex, but we were too lazy to prove it. Maybe you are not. Edit: I meant "too busy", of course.
Member since Jul 2016
93 posts

2019-01-25, 14:29 #4
Aren't both algorithms basically the same?

Alg 1) takes a CNF, transforms it to a DNF, and looks at clauses until it finds a non-contradicting one, ignoring all contradicting clauses. We can also interpret the "ignoring" as "deleting", since ignored clauses can simply be discarded.

Alg 2) takes a CNF, transforms it to a DNF, deletes all contradicting clauses, then looks at the first remaining one.

The total runtime of Alg 2) is always at least that of Alg 1), since Alg 1) stops after the first non-contradicting clause. So in Big-O the runtimes are trivially equal?
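The two variants can be put side by side in a short sketch (function names are ours, purely illustrative; literals encoded as signed integers):

```python
def first_consistent_early_exit(dnf):
    """Alg 1: scan the DNF and stop at the first non-contradicting clause."""
    for clause in map(set, dnf):
        if not any(-lit in clause for lit in clause):
            return clause
    return None

def first_consistent_filter_all(dnf):
    """Alg 2: delete every contradicting clause, then take the first
    survivor. In the worst case (all clauses contradictory) both variants
    scan the whole DNF, so their Big-O runtimes coincide."""
    survivors = [c for c in map(set, dnf)
                 if not any(-lit in c for lit in c)]
    return survivors[0] if survivors else None
```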
Member since Nov 2016
18 posts

2019-01-25, 15:45 #5
So you want to: (1) CNF -> DNF, (2) solve DNF-SAT.

There are CNF formulae for which the smallest equivalent DNF is already exponential in size. So the worst case is STILL going to happen in (1), even before we begin reading the DNF in (2). And even if there are DNF formulae (without minimizing) for which we have to go through the whole formula, which could be exponentially long: we already had to write on exponential space, so reading that same space takes about the same amount of time again, and doing something twice is the same as doing it once as far as complexity is concerned. So the challenge only helps for some inputs, and that makes no sense to me.
Member since Nov 2018
40 posts

2019-01-25, 15:46 #6
In reply to post #4
Yes, but the intended algorithms are NOT ones that decide the SAT problem but ones that translate a formula into DNF. Translating their output into an assignment is the reduction you ought to construct. So your job is to prove that whatever the complexity of the original formula-to-DNF algorithm, it does not increase by additionally deleting the inconsistent clauses.

The argument that formula-to-DNF is NP-hard then goes as follows: we can reduce SAT to formula-to-DNF without inconsistent clauses. We know that formula-to-DNF without inconsistent clauses is no harder than formula-to-DNF. Hence formula-to-DNF is NP-hard.

In effect you are just moving the complexity from the reduction into the translation algorithm, and using a generalized reduction argument to show that this does not make the translation algorithm any harder than it already is. It thereby becomes quite similar to the second proof I described. I think your argument about outputs almost gets you there already.
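A sketch of the intended reduction, with a naive distribute-and-filter translator standing in for an arbitrary formula-to-DNF algorithm that deletes inconsistent clauses (all names here are illustrative assumptions, not a fixed API; literals encoded as signed integers):

```python
from itertools import product

def naive_todnf(cnf):
    """Stand-in translator: distribute conjunction over disjunction,
    then delete every inconsistent clause (those containing A and ¬A)."""
    terms = (set(choice) for choice in product(*cnf))
    return [t for t in terms if not any(-lit in t for lit in t)]

def sat_via_todnf(cnf, todnf=naive_todnf):
    """The reduction: the input is satisfiable iff the cleaned DNF is
    non-empty, and then its first clause is a valid partial assignment,
    read off without scanning the rest of the DNF."""
    dnf = todnf(cnf)
    return dnf[0] if dnf else None
```

The point of the challenge is precisely that swapping in any other translator for `naive_todnf` must not change the asymptotic cost of the deletion step.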
Member since Nov 2018
40 posts

2019-01-25, 15:56 #7
In reply to post #5
It's perfectly fine if the worst case happens in (1); we don't care about that whatsoever (in fact, that is kind of what we want to prove). Our goal is to prove that the search problem which (1) solves is NP-hard, no matter which algorithm we choose. We don't care about (1)'s complexity. The only thing we need to prove is that no matter how (1) works, and whatever its complexity is, removing inconsistent clauses does not make it any harder.
Member since Nov 2016
18 posts

2019-01-25, 16:11 #8
How about this:
What happens to the complexity of the whole algorithm if we just don't minimize the DNF? As I see it, the complexity will be exponential. What happens if we do minimize it? It stays exponential if your hypothesis is true, and becomes MORE if not (which I also don't believe to be true).
Member since Nov 2018
40 posts

2019-01-28, 05:33 #9
How do you know that the algorithm must be exponential, that is, that TODNF is EXPTIME-complete? We do know that the known algorithms produce exponential DNFs. That does not mean that all possible algorithms do. Also see my edits to the problem statement to clarify things.

Correct. If it were more, then we could conclude nothing about the original TODNF problem, and hence the proof under "1." would fail.
Member since Oct 2016
89 posts

2019-01-28, 07:35 #10
However, in order to do this we would need an NTM (nondeterministic Turing machine) that can write at least 2^n characters (where n is the size of the CNF) in polynomial time, which is a contradiction. (To prove this, as I did, one can look at the minimized DNF of (X_1 \/ Y_1) /\ (X_2 \/ Y_2) /\ ... /\ (X_n \/ Y_n), where X_1, ..., X_n, Y_1, ..., Y_n are pairwise different variables; its DNF has 2^n terms. Therefore no Turing machine can produce the output in polynomial time, as far as I understood complexity in BFS.)
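The size of the distributed DNF in this example is easy to check mechanically (a small sketch, encoding X_i as i and Y_i as n+i, our own convention; note it only counts the terms produced by distribution, not the minimality of that DNF, which is the separate claim above):

```python
from itertools import product

def cnf_to_dnf_by_distribution(cnf):
    """Every way of picking one literal per CNF clause yields one DNF
    term, so a CNF of n binary clauses distributes into 2^n terms."""
    return [frozenset(choice) for choice in product(*cnf)]

n = 10
cnf = [(i, n + i) for i in range(1, n + 1)]  # (X_i \/ Y_i) for all i
dnf = cnf_to_dnf_by_distribution(cnf)
assert len(set(dnf)) == 2 ** n  # 2^n pairwise distinct terms
```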
Member since Nov 2018
40 posts

2019-01-29, 14:51 #11
That is correct. I was not aware that the minimal equivalent DNF can be shown to be exponential in some cases (there is a proof here: https://math.stackexchange.com/questions/2732482/exponenti… ; disregard the misleading title of the thread). Proving that is also a proof of the NP-hardness of the DNF translation, by the way. However, as far as I can tell, you don't need this knowledge for the challenge.