Exam Cheatsheet
Inference Rules and Logical Equivalences:
$((p \implies q) \land p) \vdash q$
$((p \implies q) \land \neg q) \vdash \neg p$
$((p \implies q) \land(q \implies r)) \vdash(p \implies r)$
$((p \lor q) \land \neg p) \vdash q$
$((p \implies q) \land(r \implies s) \land(\neg q \lor \neg s)) \vdash(\neg p \lor \neg r)$
$((p \implies q) \land(p \implies r)) \vdash(p \implies(q \land r))$
$\neg(p \land q) \vdash(\neg p \lor \neg q)$
$\neg(p \lor q) \vdash(\neg p \land \neg q)$
$(p \lor q) \vdash(q \lor p)$
$(p \land q) \vdash(q \land p)$
$(p \iff q) \vdash(q \iff p)$
$(p \lor(q \lor r)) \vdash((p \lor q) \lor r)$
$(p \land(q \land r)) \vdash((p \land q) \land r)$
$(p \land(q \lor r)) \vdash((p \land q) \lor(p \land r))$
$(p \lor(q \land r)) \vdash((p \lor q) \land(p \lor r))$
$(p \implies q) \vdash(\neg q \implies \neg p)$
$(p \implies q) \vdash(\neg p \lor q)$
$(p \iff q) \vdash((p \implies q) \land(q \implies p))$
$(p \iff q) \vdash((p \land q) \lor(\neg p \land \neg q))$
$(p \iff q) \vdash((p \lor \neg q) \land(\neg p \lor q))$
$((p \land q) \implies r) \vdash(p \implies(q \implies r))$
$(p \implies(q \implies r)) \vdash((p \land q) \implies r)$
$(p \land q) \vdash p$
$p \vdash(p \lor p)$
$p \vdash(p \land p)$
$p \vdash(p \lor q)$
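Each of these rules can be machine-checked by confirming that its corresponding implication is a tautology, i.e. true under every truth assignment. A minimal Python sketch (the helper names `implies` and `is_tautology` are ours, not from the course):

```python
from itertools import product

def implies(a, b):
    # Material implication: a -> b is (not a) or b.
    return (not a) or b

def is_tautology(f, n_vars):
    # True iff f holds under every assignment of its boolean arguments.
    return all(f(*vals) for vals in product([False, True], repeat=n_vars))

# Modus ponens: ((p -> q) and p) -> q
print(is_tautology(lambda p, q: implies(implies(p, q) and p, q), 2))          # True
# De Morgan: not (p and q)  <->  (not p) or (not q)
print(is_tautology(lambda p, q: (not (p and q)) == ((not p) or (not q)), 2))  # True
```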
Logical entailment:
$ \alpha \vDash \beta $ if and only if $(\alpha \land \lnot \beta)$ is unsatisfiable.
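This reduction gives a brute-force entailment check: search for a satisfying assignment of $\alpha \land \lnot \beta$; if none exists, $\alpha \vDash \beta$. A sketch (the `entails` helper is illustrative):

```python
from itertools import product

def entails(alpha, beta, n_vars):
    # alpha |= beta  iff  (alpha and not beta) has no satisfying assignment.
    return not any(alpha(*v) and not beta(*v)
                   for v in product([False, True], repeat=n_vars))

print(entails(lambda p, q: p and q, lambda p, q: p, 2))  # True:  (p and q) |= p
print(entails(lambda p, q: p, lambda p, q: p and q, 2))  # False: p does not entail (p and q)
```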
Probability
$$
P(A \mid B) = \frac{P(A \cap B)}{P(B)}
$$
$$
P(A \cap B) = P(A \mid B) * P(B)
$$
$$
P(A \cup B) = P(A) + P(B) - P(A \cap B)
$$
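As a sanity check, all three identities can be verified on a small joint probability table (the numbers below are made up):

```python
# Joint probabilities over two binary events A and B (made-up numbers).
joint = {(True, True): 0.2, (True, False): 0.3,
         (False, True): 0.1, (False, False): 0.4}

P_B = sum(p for (a, b), p in joint.items() if b)   # P(B) = 0.3
P_A = sum(p for (a, b), p in joint.items() if a)   # P(A) = 0.5
P_A_and_B = joint[(True, True)]                    # P(A n B) = 0.2
P_A_given_B = P_A_and_B / P_B                      # P(A | B) = 0.2 / 0.3
P_A_or_B = P_A + P_B - P_A_and_B                   # P(A u B) = 0.5 + 0.3 - 0.2 = 0.6
print(P_A_given_B, P_A_or_B)
```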
Conditional independence: $A$ and $B$ are conditionally independent given $C$ iff $P(A, B \mid C) = P(A \mid C) * P(B \mid C)$.
Mutual independence: $A_1, \ldots, A_n$ are mutually independent iff $P(A_{i_1} \cap \cdots \cap A_{i_k}) = P(A_{i_1}) * \cdots * P(A_{i_k})$ for every subset $\{A_{i_1}, \ldots, A_{i_k}\}$.
Pairwise independence: $P(A_i \cap A_j) = P(A_i) * P(A_j)$ for all $i \neq j$.
- Note: Pairwise independence does not guarantee mutual independence (see the counterexample sketch below).
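Classic counterexample: let $X$ and $Y$ be independent fair coin flips and $Z = X \oplus Y$. Every pair is independent, but the three variables are not mutually independent. A sketch checking this by enumeration:

```python
from itertools import product

# Uniform over (X, Y); Z = X xor Y. Each of the 4 outcomes has probability 1/4.
outcomes = [(x, y, x ^ y) for x, y in product([0, 1], repeat=2)]

def prob(pred):
    return sum(1 for o in outcomes if pred(o)) / len(outcomes)

# Pairwise: P(X=1, Z=1) = 1/4 = P(X=1) * P(Z=1); likewise for the other pairs.
print(prob(lambda o: o[0] == 1 and o[2] == 1),
      prob(lambda o: o[0] == 1) * prob(lambda o: o[2] == 1))   # 0.25 0.25
# Not mutual: P(X=1, Y=1, Z=1) = 0, but the product of marginals is 1/8.
print(prob(lambda o: o == (1, 1, 1)), 0.5 ** 3)                # 0.0 0.125
```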
Midterm Review
The assignments are a good representation of the types of questions that will be asked on the exam.
- For a particular method/skill you have learned, apply this method to a particular data set.
- e.g.: Construct a decision tree.
There are also questions covering the advantages/disadvantages of certain methods, and when and how they should be used. These may differ from what we've seen on the assignments.
Each of the 5-6 topic areas will have 2-3 questions. For example:
- Are the following statements tautologies?
- Prove something using entailment and resolution.
- Translate a sentence into propositional logic.
- Calculate a probability for a Bayesian network.
- Are variables A and B independent?
- Compile a joint probability table.
- Decision trees.
The syllabus covers the rules of the exam.
- Open textbook.
- Open printed notes.
- You can bring print-outs of the assignments and their solutions.
- You can write notes on a tablet, export them as a PDF, and bring the print-out.
- You can bring print-outs of the lecture slides.
- You can bring a calculator.
The exam will have an exit poll at the exit doors, with two questions:
- One will ask about the length of the exam.
- One will ask about the difficulty of the exam.
Everything you have seen so far, up to "Using STRIPS", can appear on the exam.
Preparing for the exam.
- Completing the assignments is the best way to prepare for the exam.
The midterm and the final exam, combined, represent 70% of your grade for the course.
A small set of the assignment questions is too difficult for the exam, particularly the probability questions of the form "Find an example that demonstrates $X$."
Intelligent agents will be covered.
STRIPS will be covered.
Implication and Equivalence
$x \equiv y \iff (x \vDash y) \land (y \vDash x)$
$x$ and $y$ are equivalent ($x \equiv y$) if and only if whenever $x$ is true, $y$ is also true, and whenever $x$ is false, $y$ is also false.
The entailment symbol $\vDash$ is not a logical connective; it is not itself part of propositional logic.
Bayesian Network
The Markov boundary of a node $A$ in a Bayesian network is the set of nodes composed of $A$'s parents, $A$'s children, and $A$'s children's other parents.
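A minimal sketch of computing a Markov boundary from a parent map (the DAG and node names are invented for illustration):

```python
def markov_boundary(node, parents):
    """parents maps each node to the set of its parents in the DAG."""
    children = {n for n, ps in parents.items() if node in ps}
    spouses = {p for c in children for p in parents[c]} - {node}
    return set(parents[node]) | children | spouses

# Example DAG: A -> C <- B, C -> D
parents = {"A": set(), "B": set(), "C": {"A", "B"}, "D": {"C"}}
print(markov_boundary("C", parents))  # {'A', 'B', 'D'}: parents A, B and child D
print(markov_boundary("A", parents))  # {'B', 'C'}: child C and C's other parent B
```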
Prove B and C are conditionally independent, given A.
- Show that any one of the following equalities holds:
- $P(B,C \mid A) = P(B \mid A) * P(C \mid A)$
- $P(B \mid A) = P(B \mid A, C)$
- $P(C \mid A) = P(C \mid A, B)$
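With access to the full joint distribution, the first equality can be checked numerically. In the sketch below, the joint over binary $A, B, C$ is built from made-up CPTs as $P(a) * P(b \mid a) * P(c \mid a)$, which forces $B$ and $C$ to be conditionally independent given $A$:

```python
from itertools import product

pA = {1: 0.5, 0: 0.5}
pB1_given_A = {1: 0.8, 0: 0.2}   # P(B=1 | A=a), made up
pC1_given_A = {1: 0.6, 0: 0.3}   # P(C=1 | A=a), made up

def pb(b, a): return pB1_given_A[a] if b else 1 - pB1_given_A[a]
def pc(c, a): return pC1_given_A[a] if c else 1 - pC1_given_A[a]

joint = {(a, b, c): pA[a] * pb(b, a) * pc(c, a)
         for a, b, c in product([0, 1], repeat=3)}

def P(pred):
    return sum(p for abc, p in joint.items() if pred(*abc))

# Check P(B=1, C=1 | A=1) == P(B=1 | A=1) * P(C=1 | A=1).
pa = P(lambda A, B, C: A == 1)
lhs = P(lambda A, B, C: A == 1 and B == 1 and C == 1) / pa
rhs = (P(lambda A, B, C: A == 1 and B == 1) / pa) * \
      (P(lambda A, B, C: A == 1 and C == 1) / pa)
print(abs(lhs - rhs) < 1e-12)   # True
```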
Apply forward chaining and backward chaining to a knowledge base.
For backward chaining, only one rule is applied at a time, using depth-first search or backtracking, to show the query is entailed.
Forward & Backward Chaining
- FC is data driven. E.g. object recognition, routine decisions.
- FC may do a lot of work that is irrelevant to the goal.
- BC is goal driven, appropriate for problem solving. E.g.:
- Where is home? What's the result of equation $x$?
- Complexity of BC can be much less than linear in the size of the KB.
Forward-chaining algorithm:
- Starting from the known facts
- Trigger all the rules whose premises are satisfied, adding their conclusions to the known facts.
- Repeat until the query is satisfied, or until no new facts are added.
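A minimal sketch of this loop for propositional definite clauses (the rule format and knowledge base are made up for illustration):

```python
def forward_chain(rules, facts, query):
    """rules: list of (premises, conclusion) pairs; facts: set of known symbols.
    Fire every rule whose premises are all known, adding its conclusion,
    until the query is derived or no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        if query in facts:
            return True
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return query in facts

rules = [({"p", "q"}, "r"), ({"r"}, "s")]
print(forward_chain(rules, {"p", "q"}, "s"))  # True: p, q |- r |- s
```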
Backward-chaining algorithm:
- Start from the query as a goal.
- If the goal is a known fact, it is proved; otherwise, find rules whose conclusion matches the goal and recursively prove all of their premises.
- Avoid loops by not re-attempting a goal already on the current path.
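And a recursive sketch of backward chaining over the same made-up rule format, using a visited set to avoid loops:

```python
def backward_chain(rules, facts, query, seen=frozenset()):
    """Prove query by working backwards: it is either a known fact, or the
    conclusion of some rule whose premises can all be proved recursively."""
    if query in facts:
        return True
    if query in seen:          # goal already on the current path; avoid looping
        return False
    return any(all(backward_chain(rules, facts, p, seen | {query})
                   for p in premises)
               for premises, conclusion in rules if conclusion == query)

rules = [({"p", "q"}, "r"), ({"r"}, "s")]
print(backward_chain(rules, {"p", "q"}, "s"))  # True: s <- r <- p, q
```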